Concepts
1: Overview
1.1: Objects In Kubernetes
1.1.1: Kubernetes Object Management
1.1.2: Object Names and IDs
1.1.3: Labels and Selectors
1.1.4: Namespaces
1.1.5: Annotations
1.1.6: Field Selectors
1.1.7: Finalizers
1.1.8: Owners and Dependents
1.1.9: Recommended Labels
1.2: Kubernetes Components
1.3: The Kubernetes API
2: Cluster Architecture
2.1: Nodes
2.2: Communication between Nodes and the Control Plane
2.3: Controllers
2.4: Leases
2.5: Cloud Controller Manager
2.6: About cgroup v2
2.7: Container Runtime Interface (CRI)
2.8: Garbage Collection
2.9: Mixed Version Proxy
3: Containers
3.1: Images
3.2: Container Environment
3.3: Runtime Class
3.4: Container Lifecycle Hooks
4: Workloads
4.1: Pods
4.1.1: Pod Lifecycle
4.1.2: Init Containers
4.1.3: Sidecar Containers
4.1.4: Ephemeral Containers
4.1.5: Disruptions
4.1.6: Pod Quality of Service Classes
4.1.7: User Namespaces
4.1.8: Downward API
4.2: Workload Management
4.2.1: Deployments
4.2.2: ReplicaSet
4.2.3: StatefulSets
4.2.4: DaemonSet
4.2.5: Jobs
4.2.6: Automatic Cleanup for Finished Jobs
4.2.7: CronJob
4.2.8: ReplicationController
4.3: Autoscaling Workloads
4.4: Managing Workloads
5: Services, Load Balancing, and Networking
5.1: Service
5.2: Ingress
5.3: Ingress Controllers


5.4: Gateway API
5.5: EndpointSlices
5.6: Network Policies
5.7: DNS for Services and Pods
5.8: IPv4/IPv6 dual-stack
5.9: Topology Aware Routing
5.10: Networking on Windows
5.11: Service ClusterIP allocation
5.12: Service Internal Traffic Policy
6: Storage
6.1: Volumes
6.2: Persistent Volumes
6.3: Projected Volumes
6.4: Ephemeral Volumes
6.5: Storage Classes
6.6: Volume Attributes Classes
6.7: Dynamic Volume Provisioning
6.8: Volume Snapshots
6.9: Volume Snapshot Classes
6.10: CSI Volume Cloning
6.11: Storage Capacity
6.12: Node-specific Volume Limits
6.13: Volume Health Monitoring
6.14: Windows Storage
7: Configuration
7.1: Configuration Best Practices
7.2: ConfigMaps
7.3: Secrets
7.4: Resource Management for Pods and Containers
7.5: Organizing Cluster Access Using kubeconfig Files
7.6: Resource Management for Windows nodes
8: Security
8.1: Cloud Native Security and Kubernetes
8.2: Pod Security Standards
8.3: Pod Security Admission
8.4: Service Accounts
8.5: Pod Security Policies
8.6: Security For Windows Nodes
8.7: Controlling Access to the Kubernetes API
8.8: Role Based Access Control Good Practices
8.9: Good practices for Kubernetes Secrets
8.10: Multi-tenancy
8.11: Hardening Guide - Authentication Mechanisms
8.12: Kubernetes API Server Bypass Risks
8.13: Linux kernel security constraints for Pods and containers
8.14: Security Checklist
9: Policies
9.1: Limit Ranges
9.2: Resource Quotas
9.3: Process ID Limits And Reservations
9.4: Node Resource Managers
10: Scheduling, Preemption and Eviction
10.1: Kubernetes Scheduler
10.2: Assigning Pods to Nodes


10.3: Pod Overhead
10.4: Pod Scheduling Readiness
10.5: Pod Topology Spread Constraints
10.6: Taints and Tolerations
10.7: Scheduling Framework
10.8: Dynamic Resource Allocation
10.9: Scheduler Performance Tuning
10.10: Resource Bin Packing
10.11: Pod Priority and Preemption
10.12: Node-pressure Eviction
10.13: API-initiated Eviction
11: Cluster Administration
11.1: Node Shutdowns
11.2: Certificates
11.3: Cluster Networking
11.4: Logging Architecture
11.5: Metrics For Kubernetes System Components
11.6: Metrics for Kubernetes Object States
11.7: System Logs
11.8: Traces For Kubernetes System Components
11.9: Proxies in Kubernetes
11.10: API Priority and Fairness
11.11: Cluster Autoscaling
11.12: Installing Addons
12: Windows in Kubernetes
12.1: Windows containers in Kubernetes
12.2: Guide for Running Windows Containers in Kubernetes
13: Extending Kubernetes
13.1: Compute, Storage, and Networking Extensions
13.1.1: Network Plugins
13.1.2: Device Plugins
13.2: Extending the Kubernetes API
13.2.1: Custom Resources
13.2.2: Kubernetes API Aggregation Layer
13.3: Operator pattern

The Concepts section helps you learn about the parts of the Kubernetes system and the abstractions Kubernetes uses to represent
your cluster, and helps you obtain a deeper understanding of how Kubernetes works.

1 - Overview
Kubernetes is a portable, extensible, open source platform for managing containerized workloads and
services that facilitates both declarative configuration and automation. It has a large, rapidly growing
ecosystem. Kubernetes services, support, and tools are widely available.

This page is an overview of Kubernetes.


The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight
letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of
Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

Going back in time


Let's take a look at why Kubernetes is so useful by going back in time.

Traditional deployment era: Early on, organizations ran applications on physical servers. There was no way to define resource
boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run
on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other
applications would underperform. A solution for this would be to run each application on a different physical server. But this did not
scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.

Virtualized deployment era: As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a
single physical server's CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the
information of one application cannot be freely accessed by another application.

Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be
added or updated easily, reduces hardware costs, and much more. With virtualization you can present a set of physical resources as
a cluster of disposable virtual machines.

Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.

Container deployment era: Containers are similar to VMs, but they have relaxed isolation properties to share the Operating
System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own
filesystem, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are
portable across clouds and OS distributions.

Containers have become popular because they provide extra benefits, such as:

Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image
use.
Continuous development, integration, and deployment: provides for reliable and frequent container image build and
deployment with quick and efficient rollbacks (due to image immutability).
Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time,
thereby decoupling applications from infrastructure.
Observability: not only surfaces OS-level information and metrics, but also application health and other signals.
Environmental consistency across development, testing, and production: runs the same on a laptop as it does in the cloud.
Cloud and OS distribution portability: runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else.
Application-centric management: raises the level of abstraction from running an OS on virtual hardware to running an
application on an OS using logical resources.
Loosely coupled, distributed, elastic, liberated micro-services: applications are broken into smaller, independent pieces and can
be deployed and managed dynamically – not a monolithic stack running on one big single-purpose machine.
Resource isolation: predictable application performance.
Resource utilization: high efficiency and density.

Why you need Kubernetes and what it can do


Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers
that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to
start. Wouldn't it be easier if this behavior was handled by a system?

That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It
takes care of scaling and failover for your application, provides deployment patterns, and more. For example: Kubernetes can easily
manage a canary deployment for your system.

Kubernetes provides you with:

Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address.
If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment is stable.
Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage,
public cloud providers, and more.
Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it
can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new
containers for your deployment, remove existing containers, and adopt all their resources into the new container.
Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell
Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make
the best use of your resources.
Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-
defined health check, and doesn't advertise them to clients until they are ready to serve.
Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords,
OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your
container images, and without exposing secrets in your stack configuration.
Batch execution: In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if
desired.
Horizontal scaling: Scale your application up and down with a simple command, with a UI, or automatically based on CPU
usage (see the example after this list).
IPv4/IPv6 dual-stack: Allocation of IPv4 and IPv6 addresses to Pods and Services.
Designed for extensibility: Add features to your Kubernetes cluster without changing upstream source code.
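
As a small sketch of the horizontal scaling point above, a Deployment can be scaled with a single kubectl command; the Deployment name web below is hypothetical:

# Scale a Deployment named "web" (hypothetical) to 5 replicas
kubectl scale deployment/web --replicas=5

# Or let Kubernetes scale it automatically based on CPU usage
kubectl autoscale deployment/web --min=2 --max=10 --cpu-percent=80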

What Kubernetes is not


Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level
rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment,
scaling, load balancing, and lets users integrate their logging, monitoring, and alerting solutions. However, Kubernetes is not
monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer
platforms, but preserves user choice and flexibility where it is important.

Kubernetes:

Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety of workloads,
including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on
Kubernetes.
Does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD)
workflows are determined by organization cultures and preferences as well as technical requirements.
Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks
(for example, Spark), databases (for example, MySQL), caches, nor cluster storage systems (for example, Ceph) as built-in
services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through
portable mechanisms, such as the Open Service Broker.
Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms
to collect and export metrics.
Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides a declarative API that may
be targeted by arbitrary forms of declarative specifications.
Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.

Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical
definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set
of independent, composable control processes that continuously drive the current state towards the provided desired state. It
shouldn't matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use
and more powerful, robust, resilient, and extensible.

What's next
Take a look at the Kubernetes Components
Take a look at The Kubernetes API
Take a look at the Cluster Architecture
Ready to Get Started?

1.1 - Objects In Kubernetes


Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to
represent the state of your cluster. Learn about the Kubernetes object model and how to work with these
objects.

This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can express them in .yaml format.

Understanding Kubernetes objects


Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your
cluster. Specifically, they can describe:

What containerized applications are running (and on which nodes)


The resources available to those applications
The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance

A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that
the object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to
look like; this is your cluster's desired state.

To work with Kubernetes objects—whether to create, modify, or delete them—you'll need to use the Kubernetes API. When you use
the kubectl command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use the
Kubernetes API directly in your own programs using one of the Client Libraries.

Object spec and status


Almost every Kubernetes object includes two nested object fields that govern the object's configuration: the object spec and the
object status . For objects that have a spec , you have to set this when you create the object, providing a description of the
characteristics you want the resource to have: its desired state.

The status describes the current state of the object, supplied and updated by the Kubernetes system and its components. The
Kubernetes control plane continually and actively manages every object's actual state to match the desired state you supplied.

For example: in Kubernetes, a Deployment is an object that can represent an application running on your cluster. When you create
the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running. The
Kubernetes system reads the Deployment spec and starts three instances of your desired application--updating the status to match
your spec. If any of those instances should fail (a status change), the Kubernetes system responds to the difference between spec
and status by making a correction--in this case, starting a replacement instance.

For more information on the object spec, status, and metadata, see the Kubernetes API Conventions.
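
As a quick way to see the difference between spec and status, you can read both back from the API with kubectl; this sketch assumes the nginx-deployment Deployment shown later on this page has already been created:

# Desired state: the replica count you asked for
kubectl get deployment nginx-deployment -o jsonpath='{.spec.replicas}'

# Observed state: the replicas the control plane currently reports as available
kubectl get deployment nginx-deployment -o jsonpath='{.status.availableReplicas}'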

Describing a Kubernetes object


When you create an object in Kubernetes, you must provide the object spec that describes its desired state, as well as some basic
information about the object (such as a name). When you use the Kubernetes API to create the object (either directly or via kubectl ),
that API request must include that information as JSON in the request body. Most often, you provide the information to kubectl in a
file known as a manifest. By convention, manifests are YAML (you could also use JSON format). Tools such as kubectl convert the
information from a manifest into JSON or another supported serialization format when making the API request over HTTP.

Here's an example manifest that shows the required fields and object spec for a Kubernetes Deployment:

application/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl
command-line interface, passing the .yaml file as an argument. Here's an example:

kubectl apply -f https://k8s.io/examples/application/deployment.yaml

The output is similar to this:

deployment.apps/nginx-deployment created

Required fields
In the manifest (YAML or JSON file) for the Kubernetes object you want to create, you'll need to set values for the following fields:

apiVersion - Which version of the Kubernetes API you're using to create this object
kind - What kind of object you want to create

metadata - Data that helps uniquely identify the object, including a name string, UID , and optional namespace

spec - What state you desire for the object

The precise format of the object spec is different for every Kubernetes object, and contains nested fields specific to that object. The
Kubernetes API Reference can help you find the spec format for all of the objects you can create using Kubernetes.

For example, see the spec field for the Pod API reference. For each Pod, the .spec field specifies the pod and its desired state (such
as the container image name for each container within that pod). Another example of an object specification is the spec field for the
StatefulSet API. For StatefulSet, the .spec field specifies the StatefulSet and its desired state. Within the .spec of a StatefulSet is a
template for Pod objects. That template describes Pods that the StatefulSet controller will create in order to satisfy the StatefulSet
specification. Different kinds of objects can also have different .status ; again, the API reference pages detail the structure of that
.status field, and its content for each different type of object.
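
If you prefer the command line to the API reference, kubectl explain prints the documented structure of any field; for example, to inspect a Deployment's spec or the containers inside its Pod template:

# Show the documented fields under a Deployment's spec
kubectl explain deployment.spec

# Drill down into nested fields, such as the Pod template's containers
kubectl explain deployment.spec.template.spec.containers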

Note:
See Configuration Best Practices for additional information on writing YAML configuration files.

Server side field validation


Starting with Kubernetes v1.25, the API server offers server side field validation that detects unrecognized or duplicate fields in an
object. It provides all the functionality of kubectl --validate on the server side.

The kubectl tool uses the --validate flag to set the level of field validation. It accepts the values ignore , warn , and strict while
also accepting the values true (equivalent to strict ) and false (equivalent to ignore ). The default validation setting for kubectl
is --validate=true .

Strict

Strict field validation, errors on validation failure

Warn

Field validation is performed, but errors are exposed as warnings rather than failing the request

Ignore

No server side field validation is performed

When kubectl cannot connect to an API server that supports field validation, it will fall back to using client-side validation.
Kubernetes 1.27 and later versions always offer field validation; older Kubernetes releases might not. If your cluster is older than
v1.27, check the documentation for your version of Kubernetes.
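
For example, the validation level can be set explicitly when applying a manifest; the file name deployment.yaml here is just a placeholder:

# Reject the request if the manifest contains unknown or duplicate fields
kubectl apply --validate=strict -f deployment.yaml

# Accept the request but surface problems as warnings
kubectl apply --validate=warn -f deployment.yaml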

What's next
If you're new to Kubernetes, read more about the following:

Pods which are the most important basic Kubernetes objects.


Deployment objects.
Controllers in Kubernetes.
kubectl and kubectl commands.

Kubernetes Object Management explains how to use kubectl to manage objects. You might need to install kubectl if you don't
already have it available.

To learn about the Kubernetes API in general, visit:

Kubernetes API overview

To learn about objects in Kubernetes in more depth, read other pages in this section:

1.1.1 - Kubernetes Object Management


The kubectl command-line tool supports several different ways to create and manage Kubernetes objects. This document provides
an overview of the different approaches. Read the Kubectl book for details of managing objects by Kubectl.

Management techniques
Warning:
A Kubernetes object should be managed using only one technique. Mixing and matching techniques for the same object results
in undefined behavior.

Management technique             | Operates on          | Recommended environment | Supported writers | Learning curve
Imperative commands              | Live objects         | Development projects    | 1+                | Lowest
Imperative object configuration  | Individual files     | Production projects     | 1                 | Moderate
Declarative object configuration | Directories of files | Production projects     | 1+                | Highest

Imperative commands
When using imperative commands, a user operates directly on live objects in a cluster. The user provides operations to the kubectl
command as arguments or flags.

This is the recommended way to get started or to run a one-off task in a cluster. Because this technique operates directly on live
objects, it provides no history of previous configurations.

Examples
Run an instance of the nginx container by creating a Deployment object:

kubectl create deployment nginx --image nginx

Trade-offs
Advantages compared to object configuration:

Commands are expressed as a single action word.


Commands require only a single step to make changes to the cluster.

Disadvantages compared to object configuration:

Commands do not integrate with change review processes.


Commands do not provide an audit trail associated with changes.
Commands do not provide a source of records except for what is live.
Commands do not provide a template for creating new objects.

Imperative object configuration


In imperative object configuration, the kubectl command specifies the operation (create, replace, etc.), optional flags and at least one
file name. The file specified must contain a full definition of the object in YAML or JSON format.

See the API reference for more details on object definitions.

Warning:
The imperative replace command replaces the existing spec with the newly provided one, dropping all changes to the object
missing from the configuration file. This approach should not be used with resource types whose specs are updated
independently of the configuration file. Services of type LoadBalancer, for example, have their externalIPs field updated
independently from the configuration by the cluster.

Examples
Create the objects defined in a configuration file:

kubectl create -f nginx.yaml

Delete the objects defined in two configuration files:

kubectl delete -f nginx.yaml -f redis.yaml

Update the objects defined in a configuration file by overwriting the live configuration:

kubectl replace -f nginx.yaml

Trade-offs
Advantages compared to imperative commands:

Object configuration can be stored in a source control system such as Git.


Object configuration can integrate with processes such as reviewing changes before push and audit trails.
Object configuration provides a template for creating new objects.

Disadvantages compared to imperative commands:

Object configuration requires basic understanding of the object schema.


Object configuration requires the additional step of writing a YAML file.

Advantages compared to declarative object configuration:

Imperative object configuration behavior is simpler and easier to understand.


As of Kubernetes version 1.5, imperative object configuration is more mature.

Disadvantages compared to declarative object configuration:

Imperative object configuration works best on files, not directories.


Updates to live objects must be reflected in configuration files, or they will be lost during the next replacement.

Declarative object configuration


When using declarative object configuration, a user operates on object configuration files stored locally, however the user does not
define the operations to be taken on the files. Create, update, and delete operations are automatically detected per-object by
kubectl . This enables working on directories, where different operations might be needed for different objects.

Note:
Declarative object configuration retains changes made by other writers, even if the changes are not merged back to the object
configuration file. This is possible by using the patch API operation to write only observed differences, instead of using the
replace API operation to replace the entire object configuration.

Examples
Process all object configuration files in the configs directory, and create or patch the live objects. You can first diff to see what
changes are going to be made, and then apply:

kubectl diff -f configs/


kubectl apply -f configs/

Recursively process directories:

kubectl diff -R -f configs/


kubectl apply -R -f configs/

Trade-offs
Advantages compared to imperative object configuration:

Changes made directly to live objects are retained, even if they are not merged back into the configuration files.
Declarative object configuration has better support for operating on directories and automatically detecting operation types
(create, patch, delete) per-object.

Disadvantages compared to imperative object configuration:

Declarative object configuration is harder to debug and understand results when they are unexpected.
Partial updates using diffs create complex merge and patch operations.

What's next
Managing Kubernetes Objects Using Imperative Commands
Imperative Management of Kubernetes Objects Using Configuration Files
Declarative Management of Kubernetes Objects Using Configuration Files
Declarative Management of Kubernetes Objects Using Kustomize
Kubectl Command Reference
Kubectl Book
Kubernetes API Reference

1.1.2 - Object Names and IDs


Each object in your cluster has a Name that is unique for that type of resource. Every Kubernetes object also has a UID that is unique
across your whole cluster.

For example, you can only have one Pod named myapp-1234 within the same namespace, but you can have one Pod and one
Deployment that are each named myapp-1234 .

For non-unique user-provided attributes, Kubernetes provides labels and annotations.

Names
A client-provided string that refers to an object in a resource URL, such as /api/v1/pods/some-name .

Only one object of a given kind can have a given name at a time. However, if you delete the object, you can make a new object with
the same name.

Names must be unique across all API versions of the same resource. API resources are distinguished by their API group,
resource type, namespace (for namespaced resources), and name. In other words, API version is irrelevant in this context.

Note:
In cases when objects represent a physical entity, like a Node representing a physical host, when the host is re-created under
the same name without deleting and re-creating the Node, Kubernetes treats the new host as the old one, which may lead to
inconsistencies.

Below are four types of commonly used name constraints for resources.

DNS Subdomain Names


Most resource types require a name that can be used as a DNS subdomain name as defined in RFC 1123. This means the name
must:

contain no more than 253 characters


contain only lowercase alphanumeric characters, '-' or '.'
start with an alphanumeric character
end with an alphanumeric character

RFC 1123 Label Names


Some resource types require their names to follow the DNS label standard as defined in RFC 1123. This means the name must:

contain at most 63 characters


contain only lowercase alphanumeric characters or '-'
start with an alphanumeric character
end with an alphanumeric character

RFC 1035 Label Names


Some resource types require their names to follow the DNS label standard as defined in RFC 1035. This means the name must:

contain at most 63 characters


contain only lowercase alphanumeric characters or '-'
start with an alphabetic character
end with an alphanumeric character

Note:
The only difference between the RFC 1035 and RFC 1123 label standards is that RFC 1123 labels are allowed to start with a digit,
whereas RFC 1035 labels can start with a lowercase alphabetic character only.

Path Segment Names


Some resource types require their names to be able to be safely encoded as a path segment. In other words, the name may not be
"." or ".." and the name may not contain "/" or "%".

Here's an example manifest for a Pod named nginx-demo .

apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

Note:
Some resource types have additional restrictions on their names.

UIDs
A Kubernetes systems-generated string to uniquely identify objects.

Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID. It is intended to distinguish between
historical occurrences of similar entities.

Kubernetes UIDs are universally unique identifiers (also known as UUIDs). UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T
X.667.
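
To see an object's name and UID together, you can query its metadata; this sketch assumes the nginx-demo Pod from the manifest above exists:

# Print the cluster-unique UID of the Pod
kubectl get pod nginx-demo -o jsonpath='{.metadata.uid}'

# Or view the name and UID side by side
kubectl get pod nginx-demo -o custom-columns=NAME:.metadata.name,UID:.metadata.uid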

What's next
Read about labels and annotations in Kubernetes.
See the Identifiers and Names in Kubernetes design document.

1.1.3 - Labels and Selectors


Labels are key/value pairs that are attached to objects such as Pods. Labels are intended to be used to specify identifying attributes
of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to
organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified
at any time. Each object can have a set of key/value labels defined. Each Key must be unique for a given object.

"metadata": {
"labels": {
"key1" : "value1",
"key2" : "value2"
}
}

Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded
using annotations.

Motivation
Labels enable users to map their own organizational structures onto system objects in a loosely coupled fashion, without requiring
clients to store these mappings.

Service deployments and batch processing pipelines are often multi-dimensional entities (e.g., multiple partitions or deployments,
multiple release tracks, multiple tiers, multiple micro-services per tier). Management often requires cross-cutting operations, which
breaks encapsulation of strictly hierarchical representations, especially rigid hierarchies determined by the infrastructure rather
than by users.

Example labels:

"release" : "stable" , "release" : "canary"

"environment" : "dev" , "environment" : "qa" , "environment" : "production"

"tier" : "frontend" , "tier" : "backend" , "tier" : "cache"

"partition" : "customerA" , "partition" : "customerB"

"track" : "daily" , "track" : "weekly"

These are examples of commonly used labels; you are free to develop your own conventions. Keep in mind that label Key must be
unique for a given object.

Syntax and character set


Labels are key/value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash ( / ). The name
segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character ( [a-z0-9A-Z] ) with
dashes ( - ), underscores ( _ ), dots ( . ), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS
subdomain: a series of DNS labels separated by dots ( . ), not longer than 253 characters in total, followed by a slash ( / ).

If the prefix is omitted, the label Key is presumed to be private to the user. Automated system components (e.g. kube-scheduler ,
kube-controller-manager , kube-apiserver , kubectl , or other third-party automation) which add labels to end-user objects must
specify a prefix.

The kubernetes.io/ and k8s.io/ prefixes are reserved for Kubernetes core components.

Valid label value:

must be 63 characters or less (can be empty),


unless empty, must begin and end with an alphanumeric character ( [a-z0-9A-Z] ),
could contain dashes ( - ), underscores ( _ ), dots ( . ), and alphanumerics between.

For example, here's a manifest for a Pod that has two labels environment: production and app: nginx :

apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    environment: production
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

Label selectors
Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).

Via a label selector, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.

The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements
which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical
AND ( && ) operator.

The semantics of empty or non-specified selectors are dependent on the context, and API types that use selectors should document
the validity and meaning of them.

Note:
For some API types, such as ReplicaSets, the label selectors of two instances must not overlap within a namespace, or the
controller can see that as conflicting instructions and fail to determine how many replicas should be present.

Caution:
For both equality-based and set-based conditions there is no logical OR (||) operator. Ensure your filter statements are
structured accordingly.

Equality-based requirement
Equality- or inequality-based requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified
label constraints, though they may have additional labels as well. Three kinds of operators are admitted: = , == , != . The first two
represent equality (and are synonyms), while the latter represents inequality. For example:

environment = production
tier != frontend

The former selects all resources with key equal to environment and value equal to production . The latter selects all resources with
key equal to tier and value distinct from frontend , and all resources with no labels with the tier key. One could filter for
resources in production excluding frontend using the comma operator: environment=production,tier!=frontend

One usage scenario for equality-based label requirement is for Pods to specify node selection criteria. For example, the sample Pod
below selects nodes with the label " accelerator=nvidia-tesla-p100 ".

apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
    - name: cuda-test
      image: "registry.k8s.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1
  nodeSelector:
    accelerator: nvidia-tesla-p100

Set-based requirement
Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in , notin and
exists (only the key identifier). For example:

environment in (production, qa)


tier notin (frontend, backend)
partition
!partition

The first example selects all resources with key equal to environment and value equal to production or qa .
The second example selects all resources with key equal to tier and values other than frontend and backend , and all
resources with no labels with the tier key.
The third example selects all resources including a label with key partition ; no values are checked.
The fourth example selects all resources without a label with key partition ; no values are checked.

Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with
environment different than qa can be achieved using partition,environment notin (qa) . The set-based label selector is a general
form of equality since environment=production is equivalent to environment in (production) ; similarly for != and notin .

Set-based requirements can be mixed with equality-based requirements. For example: partition in (customerA,
customerB),environment!=qa .
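
As a sketch of how that mixed selector looks on the command line, it can be passed to kubectl with -l; the label values here are the example ones used above:

# Resources in the customerA or customerB partition, excluding the qa environment
kubectl get pods -l 'partition in (customerA,customerB),environment!=qa'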

API
LIST and WATCH filtering
LIST and WATCH operations may specify label selectors to filter the sets of objects returned using a query parameter. Both
requirements are permitted (presented here as they would appear in a URL query string):

equality-based requirements: ?labelSelector=environment%3Dproduction,tier%3Dfrontend


set-based requirements: ?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29

Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and
using equality-based one may write:

kubectl get pods -l environment=production,tier=frontend

or using set-based requirements:

kubectl get pods -l 'environment in (production),tier in (frontend)'

As already mentioned, set-based requirements are more expressive. For instance, they can implement the OR operator on values:
kubectl get pods -l 'environment in (production, qa)'

or restricting negative matching via notin operator:

kubectl get pods -l 'environment,environment notin (frontend)'

Set references in API objects


Some Kubernetes objects, such as services and replicationcontrollers , also use label selectors to specify sets of other resources,
such as pods.

Service and ReplicationController


The set of pods that a service targets is defined with a label selector. Similarly, the population of pods that a replicationcontroller
should manage is also defined with a label selector.

Label selectors for both objects are defined in json or yaml files using maps, and only equality-based requirement selectors are
supported:

"selector": {
"component" : "redis",
}

or

selector:
  component: redis

This selector (respectively in json or yaml format) is equivalent to component=redis or component in (redis) .

Resources that support set-based requirements


Newer resources, such as Job , Deployment , ReplicaSet , and DaemonSet , support set-based requirements as well.

selector:
  matchLabels:
    component: redis
  matchExpressions:
    - { key: tier, operator: In, values: [cache] }
    - { key: environment, operator: NotIn, values: [dev] }

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of
matchExpressions , whose key field is "key", the operator is "In", and the values array contains only "value". matchExpressions is a
list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in
the case of In and NotIn. All of the requirements, from both matchLabels and matchExpressions are ANDed together -- they must all
be satisfied in order to match.

Selecting sets of nodes


One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule. See the documentation on
node selection for more information.

Using labels effectively


You can apply a single label to any resources, but this is not always the best practice. There are many scenarios where multiple
labels should be used to distinguish resource sets from one another.

For instance, different applications would use different values for the app label, but a multi-tier application, such as the guestbook
example, would additionally need to distinguish each tier. The frontend could carry the following labels:

labels:
  app: guestbook
  tier: frontend

while the Redis master and replica would have different tier labels, and perhaps even an additional role label:

labels:
  app: guestbook
  tier: backend
  role: master

and

labels:
  app: guestbook
  tier: backend
  role: replica

The labels allow for slicing and dicing the resources along any dimension specified by a label:

kubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml


kubectl get pods -Lapp -Ltier -Lrole

NAME READY STATUS RESTARTS AGE APP TIER ROLE


guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>
guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>
guestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>
guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master
guestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica
guestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica
my-nginx-divi2 1/1 Running 0 29m nginx <none> <none>
my-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>

kubectl get pods -lapp=guestbook,role=replica

NAME READY STATUS RESTARTS AGE


guestbook-redis-replica-2q2yf 1/1 Running 0 3m
guestbook-redis-replica-qgazl 1/1 Running 0 3m

Updating labels
Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl
label . For example, if you want to label all your NGINX Pods as frontend tier, run:

kubectl label pods -l app=nginx tier=fe

pod/my-nginx-2035384211-j5fhi labeled
pod/my-nginx-2035384211-u2c7e labeled
pod/my-nginx-2035384211-u3t6x labeled

This first filters all pods with the label "app=nginx", and then labels them with the "tier=fe". To see the pods you labeled, run:

kubectl get pods -l app=nginx -L tier

NAME READY STATUS RESTARTS AGE TIER


my-nginx-2035384211-j5fhi 1/1 Running 0 23m fe
my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe
my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe

This outputs all "app=nginx" pods, with an additional label column of pods' tier (specified with -L or --label-columns ).

For more information, please see kubectl label.

What's next
Learn how to add a label to a node
Find Well-known labels, Annotations and Taints
See Recommended labels
Enforce Pod Security Standards with Namespace Labels
Read a blog on Writing a Controller for Pod Labels

1.1.4 - Namespaces
In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need
to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects
(e.g. Deployments, Services, etc.) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc.).

When to Use Multiple Namespaces


Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a
few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the
features they provide.

Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.
Namespaces cannot be nested inside one another and each Kubernetes resource can only be in one namespace.

Namespaces are a way to divide cluster resources between multiple users (via resource quota).

It is not necessary to use multiple namespaces to separate slightly different resources, such as different versions of the same
software: use labels to distinguish resources within the same namespace.

Note:
For a production cluster, consider not using the default namespace. Instead, make other namespaces and use those.

Initial namespaces
Kubernetes starts with four initial namespaces:

default

Kubernetes includes this namespace so that you can start using your new cluster without first creating a namespace.

kube-node-lease

This namespace holds Lease objects associated with each node. Node leases allow the kubelet to send heartbeats so that the
control plane can detect node failure.

kube-public

This namespace is readable by all clients (including those not authenticated). This namespace is mostly reserved for cluster
usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this
namespace is only a convention, not a requirement.

kube-system

The namespace for objects created by the Kubernetes system.

Working with Namespaces


Creation and deletion of namespaces are described in the Admin Guide documentation for namespaces.

Note:
Avoid creating namespaces with the prefix `kube-`, since it is reserved for Kubernetes system namespaces.

Viewing namespaces
You can list the current namespaces in a cluster using:

kubectl get namespace

NAME STATUS AGE


default Active 1d
kube-node-lease Active 1d
kube-public Active 1d
kube-system Active 1d

Setting the namespace for a request


To set the namespace for a current request, use the --namespace flag.

For example:

kubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>


kubectl get pods --namespace=<insert-namespace-name-here>

Setting the namespace preference


You can permanently save the namespace for all subsequent kubectl commands in that context.

kubectl config set-context --current --namespace=<insert-namespace-name-here>


# Validate it
kubectl config view --minify | grep namespace:

Namespaces and DNS


When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-
name>.svc.cluster.local , which means that if a container only uses <service-name> , it will resolve to the service which is local to a
namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and
Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN).

As a result, all namespace names must be valid RFC 1123 DNS labels.
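
As a hedged illustration, run from inside a Pod that has curl available, a workload in the dev namespace could reach a Service named db in the prod namespace like this (the Service and namespace names are hypothetical):

# Short name resolves only within the Pod's own namespace
curl http://db/

# Fully qualified name reaches the Service in another namespace
curl http://db.prod.svc.cluster.local/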

Warning:
By creating namespaces with the same name as public top-level domains, Services in these namespaces can have short DNS
names that overlap with public DNS records. Workloads from any namespace performing a DNS lookup without a trailing dot
will be redirected to those services, taking precedence over public DNS.

To mitigate this, limit privileges for creating namespaces to trusted users. If required, you could additionally configure third-
party security controls, such as admission webhooks, to block creating any namespace with the name of public TLDs.

Not all objects are in a namespace


Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespaces. However, namespace
resources are not themselves in a namespace. And low-level resources, such as nodes and persistentVolumes, are not in any
namespace.

To see which Kubernetes resources are and aren't in a namespace:

# In a namespace
kubectl api-resources --namespaced=true

# Not in a namespace
kubectl api-resources --namespaced=false

Automatic labelling
ⓘ FEATURE STATE: Kubernetes 1.22 [stable]

The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the
namespace name.
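
You can confirm this label on any namespace; for example:

# The kubernetes.io/metadata.name label always matches the namespace name
kubectl get namespace default --show-labels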

What's next
Learn more about creating a new namespace.
Learn more about deleting a namespace.

1.1.5 - Annotations
You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can
retrieve this metadata.

Attaching metadata to objects


You can use either labels or annotations to attach metadata to Kubernetes objects. Labels can be used to select objects and to find
collections of objects that satisfy certain conditions. In contrast, annotations are not used to identify and select objects. The
metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels. It is
possible to use labels as well as annotations in the metadata of the same object.

Annotations, like labels, are key/value maps:

"metadata": {
"annotations": {
"key1" : "value1",
"key2" : "value2"
}
}

Note:
The keys and the values in the map must be strings. In other words, you cannot use numeric, boolean, list or other types for
either the keys or the values.

Here are some examples of information that could be recorded in annotations:

Fields managed by a declarative configuration layer. Attaching these fields as annotations distinguishes them from default
values set by clients or servers, and from auto-generated fields and fields set by auto-sizing or auto-scaling systems.

Build, release, or image information like timestamps, release IDs, git branch, PR numbers, image hashes, and registry address.

Pointers to logging, monitoring, analytics, or audit repositories.

Client library or tool information that can be used for debugging purposes: for example, name, version, and build information.

User or tool/system provenance information, such as URLs of related objects from other ecosystem components.

Lightweight rollout tool metadata: for example, config or checkpoints.

Phone or pager numbers of persons responsible, or directory entries that specify where that information can be found, such as
a team web site.

Directives from the end-user to the implementations to modify behavior or engage non-standard features.

Instead of using annotations, you could store this type of information in an external database or directory, but that would make it
much harder to produce shared client libraries and tools for deployment, management, introspection, and the like.

Syntax and character set


Annotations are key/value pairs. Valid annotation keys have two segments: an optional prefix and name, separated by a slash ( / ).
The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character ( [a-z0-9A-
Z] ) with dashes ( - ), underscores ( _ ), dots ( . ), and alphanumerics between. The prefix is optional. If specified, the prefix must be
a DNS subdomain: a series of DNS labels separated by dots ( . ), not longer than 253 characters in total, followed by a slash ( / ).

If the prefix is omitted, the annotation Key is presumed to be private to the user. Automated system components (e.g. kube-
scheduler , kube-controller-manager , kube-apiserver , kubectl , or other third-party automation) which add annotations to end-user
objects must specify a prefix.

The kubernetes.io/ and k8s.io/ prefixes are reserved for Kubernetes core components.

For example, here's a manifest for a Pod that has the annotation imageregistry: https://hub.docker.com/ :

apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    imageregistry: "https://hub.docker.com/"
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
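
Annotations can also be added to or removed from a live object with kubectl annotate; this sketch assumes the annotations-demo Pod above exists:

# Add (or update, with --overwrite) an annotation on a live object
kubectl annotate pod annotations-demo imageregistry="https://hub.docker.com/" --overwrite

# Remove the annotation by suffixing the key with a dash
kubectl annotate pod annotations-demo imageregistry-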

What's next
Learn more about Labels and Selectors.
Find Well-known labels, Annotations and Taints

1.1.6 - Field Selectors


Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field
selector queries:

metadata.name=my-service

metadata.namespace!=default

status.phase=Pending

This kubectl command selects all Pods for which the value of the status.phase field is Running :

kubectl get pods --field-selector status.phase=Running

Note:
Field selectors are essentially resource filters. By default, no selectors/filters are applied, meaning that all resources of the
specified type are selected. This makes the kubectl queries kubectl get pods and kubectl get pods --field-selector ""
equivalent.

Supported fields
Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace
fields. Using unsupported field selectors produces an error. For example:

kubectl get ingress --field-selector foo.bar=baz

Error from server (BadRequest): Unable to find "ingresses" that match label selector "", field selector "foo.bar=baz": "foo.ba

List of supported fields


Kind                        Fields

Pod                         spec.nodeName, spec.restartPolicy, spec.schedulerName, spec.serviceAccountName,
                            spec.hostNetwork, status.phase, status.podIP, status.nominatedNodeName

Event                       involvedObject.kind, involvedObject.namespace, involvedObject.name, involvedObject.uid,
                            involvedObject.apiVersion, involvedObject.resourceVersion, involvedObject.fieldPath,
                            reason, reportingComponent, source, type

Secret                      type

Namespace                   status.phase

ReplicaSet                  status.replicas

ReplicationController       status.replicas

Job                         status.successful

Node                        spec.unschedulable

CertificateSigningRequest   spec.signerName
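
As an illustration of the Event fields listed above, the following command lists warning Events that reference a particular Pod (the Pod name my-pod is an assumption):

kubectl get events --field-selector involvedObject.kind=Pod,involvedObject.name=my-pod,type=Warning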

Supported operators
You can use the = , == , and != operators with field selectors ( = and == mean the same thing). This kubectl command, for
example, selects all Kubernetes Services that aren't in the default namespace:

kubectl get services --all-namespaces --field-selector metadata.namespace!=default

Note:
Set-based operators (in, notin, exists) are not supported for field selectors.

Chained selectors
As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects
all Pods for which the status.phase does not equal Running and the spec.restartPolicy field equals Always :

kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always

Multiple resource types


You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not
in the default namespace:

kubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default

1.1.7 - Finalizers
Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked
for deletion. Finalizers alert controllers to clean up resources the deleted object owned.

When you tell Kubernetes to delete an object that has finalizers specified for it, the Kubernetes API marks the object for deletion by
populating .metadata.deletionTimestamp , and returns a 202 status code (HTTP "Accepted"). The target object remains in a
terminating state while the control plane, or other components, take the actions defined by the finalizers. After these actions are
complete, the controller removes the relevant finalizers from the target object. When the metadata.finalizers field is empty,
Kubernetes considers the deletion complete and deletes the object.

You can use finalizers to control garbage collection of resources: they alert controllers to perform specific cleanup tasks, such as removing related resources or infrastructure, before the target resource is deleted.

Finalizers don't usually specify the code to execute. Instead, they are typically lists of keys on a specific resource similar to
annotations. Kubernetes specifies some finalizers automatically, but you can also specify your own.

How finalizers work


When you create a resource using a manifest file, you can specify finalizers in the metadata.finalizers field. When you attempt to
delete the resource, the API server handling the delete request notices the values in the finalizers field and does the following:

Modifies the object to add a metadata.deletionTimestamp field with the time you started the deletion.
Prevents the object from being removed until all items are removed from its metadata.finalizers field
Returns a 202 status code (HTTP "Accepted")

The controller managing that finalizer notices the update to the object setting the metadata.deletionTimestamp , indicating deletion of
the object has been requested. The controller then attempts to satisfy the requirements of the finalizers specified for that resource.
Each time a finalizer condition is satisfied, the controller removes that key from the resource's finalizers field. When the
finalizers field is emptied, an object with a deletionTimestamp field set is automatically deleted. You can also use finalizers to
prevent deletion of unmanaged resources.
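
For illustration, here is a minimal sketch of a manifest that specifies a custom finalizer; the key example.com/cleanup-external-resource is a hypothetical finalizer that your own controller would be responsible for removing:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                              # hypothetical name
  finalizers:
  - example.com/cleanup-external-resource       # hypothetical finalizer key
data:
  key: value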

A common example of a finalizer is kubernetes.io/pv-protection , which prevents accidental deletion of PersistentVolume objects.
When a PersistentVolume object is in use by a Pod, Kubernetes adds the pv-protection finalizer. If you try to delete the
PersistentVolume , it enters a Terminating status, but the controller can't delete it because the finalizer exists. When the Pod stops
using the PersistentVolume , Kubernetes clears the pv-protection finalizer, and the controller deletes the volume.
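
To see which finalizers are holding up a deletion, you can inspect the object's metadata; for example (the PersistentVolume name is an assumption):

kubectl get persistentvolume task-pv -o jsonpath='{.metadata.finalizers}'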

Note:
When you DELETE an object, Kubernetes adds the deletion timestamp for that object and then immediately starts to
restrict changes to the .metadata.finalizers field for the object that is now pending deletion. You can remove existing
finalizers (deleting an entry from the finalizers list) but you cannot add a new finalizer. You also cannot modify the
deletionTimestamp for an object once it is set.

After the deletion is requested, you cannot resurrect the object. The only option is to let the deletion complete and then create a new, similar object.

Owner references, labels, and finalizers


Like labels, owner references describe the relationships between objects in Kubernetes, but are used for a different purpose. When
a controller manages objects like Pods, it uses labels to track changes to groups of related objects. For example, when a Job creates
one or more Pods, the Job controller applies labels to those pods and tracks changes to any Pods in the cluster with the same label.

The Job controller also adds owner references to those Pods, pointing at the Job that created the Pods. If you delete the Job while
these Pods are running, Kubernetes uses the owner references (not labels) to determine which Pods in the cluster need cleanup.

Kubernetes also processes finalizers when it identifies owner references on a resource targeted for deletion.

In some situations, finalizers can block the deletion of dependent objects, which can cause the targeted owner object to remain for
longer than expected without being fully deleted. In these situations, you should check finalizers and owner references on the target
owner and dependent objects to troubleshoot the cause.

Note:
In cases where objects are stuck in a deleting state, avoid manually removing finalizers to allow deletion to continue. Finalizers
are usually added to resources for a reason, so forcefully removing them can lead to issues in your cluster. This should only be
done when the purpose of the finalizer is understood and is accomplished in another way (for example, manually cleaning up
some dependent object).

What's next
Read Using Finalizers to Control Deletion on the Kubernetes blog.

1.1.8 - Owners and Dependents


In Kubernetes, some objects are owners of other objects. For example, a ReplicaSet is the owner of a set of Pods. These owned
objects are dependents of their owner.

Ownership is different from the labels and selectors mechanism that some resources also use. For example, consider a Service that
creates EndpointSlice objects. The Service uses labels to allow the control plane to determine which EndpointSlice objects are used
for that Service. In addition to the labels, each EndpointSlice that is managed on behalf of a Service has an owner reference. Owner
references help different parts of Kubernetes avoid interfering with objects they don’t control.

Owner references in object specifications


Dependent objects have a metadata.ownerReferences field that references their owner object. A valid owner reference consists of the
object name and a UID within the same namespace as the dependent object. Kubernetes sets the value of this field automatically for
objects that are dependents of other objects like ReplicaSets, DaemonSets, Deployments, Jobs and CronJobs, and
ReplicationControllers. You can also configure these relationships manually by changing the value of this field. However, you usually
don't need to and can allow Kubernetes to automatically manage the relationships.

Dependent objects also have an ownerReferences.blockOwnerDeletion field that takes a boolean value and controls whether specific
dependents can block garbage collection from deleting their owner object. Kubernetes automatically sets this field to true if a
controller (for example, the Deployment controller) sets the value of the metadata.ownerReferences field. You can also set the value of
the blockOwnerDeletion field manually to control which dependents block garbage collection.

A Kubernetes admission controller controls user access to change this field for dependent resources, based on the delete
permissions of the owner. This control prevents unauthorized users from delaying owner object deletion.
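
As a sketch of what this looks like in practice, here is an excerpt of a Pod that is a dependent of a ReplicaSet (the names and UID are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: frontend-b2zdv                              # illustrative Pod name
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend                                  # illustrative owner name
    uid: d9607e19-f88f-11e6-a518-42010a800195       # illustrative UID
    controller: true
    blockOwnerDeletion: true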

Note:
Cross-namespace owner references are disallowed by design. Namespaced dependents can specify cluster-scoped or
namespaced owners. A namespaced owner must exist in the same namespace as the dependent. If it does not, the owner
reference is treated as absent, and the dependent is subject to deletion once all owners are verified absent.

Cluster-scoped dependents can only specify cluster-scoped owners. In v1.20+, if a cluster-scoped dependent specifies a
namespaced kind as an owner, it is treated as having an unresolvable owner reference, and is not able to be garbage collected.

In v1.20+, if the garbage collector detects an invalid cross-namespace ownerReference , or a cluster-scoped dependent with an
ownerReference referencing a namespaced kind, a warning Event with a reason of OwnerRefInvalidNamespace and an
involvedObject of the invalid dependent is reported. You can check for that kind of Event by running kubectl get events -A --
field-selector=reason=OwnerRefInvalidNamespace .

Ownership and finalizers


When you tell Kubernetes to delete a resource, the API server allows the managing controller to process any finalizer rules for the
resource. Finalizers prevent accidental deletion of resources your cluster may still need to function correctly. For example, if you try
to delete a PersistentVolume that is still in use by a Pod, the deletion does not happen immediately because the PersistentVolume
has the kubernetes.io/pv-protection finalizer on it. Instead, the volume remains in the Terminating status until Kubernetes clears
the finalizer, which only happens after the PersistentVolume is no longer bound to a Pod.

Kubernetes also adds finalizers to an owner resource when you use either foreground or orphan cascading deletion. In foreground
deletion, it adds the foreground finalizer so that the controller must delete dependent resources that also have
ownerReferences.blockOwnerDeletion=true before it deletes the owner. If you specify an orphan deletion policy, Kubernetes adds the
orphan finalizer so that the controller ignores dependent resources after it deletes the owner object.
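
With kubectl, for example, you can choose the cascading deletion behavior when deleting an owner (the Deployment name is an assumption):

kubectl delete deployment nginx-deployment --cascade=foreground
kubectl delete deployment nginx-deployment --cascade=orphan

The first command blocks deletion of the Deployment until its dependents are deleted; the second deletes the Deployment and leaves its ReplicaSets and Pods behind.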

What's next
Learn more about Kubernetes finalizers.
Learn about garbage collection.
Read the API reference for object metadata.

1.1.9 - Recommended Labels


You can visualize and manage Kubernetes objects with more tools than kubectl and the dashboard. A common set of labels allows
tools to work interoperably, describing objects in a common manner that all tools can understand.

In addition to supporting tooling, the recommended labels describe applications in a way that can be queried.

The metadata is organized around the concept of an application. Kubernetes is not a platform as a service (PaaS) and doesn't have or
enforce a formal notion of an application. Instead, applications are informal and described with metadata. The definition of what an
application contains is loose.

Note:
These are recommended labels. They make it easier to manage applications but aren't required for any core tooling.

Shared labels and annotations share a common prefix: app.kubernetes.io . Labels without a prefix are private to users. The shared
prefix ensures that shared labels do not interfere with custom user labels.

Labels
In order to take full advantage of using these labels, they should be applied on every resource object.

Key                            Description                                                       Example        Type

app.kubernetes.io/name         The name of the application                                       mysql          string

app.kubernetes.io/instance     A unique name identifying the instance of an application          mysql-abcxyz   string

app.kubernetes.io/version      The current version of the application                            5.7.21         string
                               (e.g., a SemVer 1.0, revision hash, etc.)

app.kubernetes.io/component    The component within the architecture                             database       string

app.kubernetes.io/part-of      The name of a higher level application this one is part of        wordpress      string

app.kubernetes.io/managed-by   The tool being used to manage the operation of an application     Helm           string

To illustrate these labels in action, consider the following StatefulSet object:

# This is an excerpt
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxyz
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
    app.kubernetes.io/managed-by: Helm

Applications And Instances Of Applications


An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example,
WordPress can be installed more than once where different websites are different installations of WordPress.

The name of an application and the instance name are recorded separately. For example, WordPress has a app.kubernetes.io/name
of wordpress while it has an instance name, represented as app.kubernetes.io/instance with a value of wordpress-abcxyz . This
enables the application and instance of the application to be identifiable. Every instance of an application must have a unique name.
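
Because the name and the instance are recorded in separate labels, you can query at either granularity; for example:

# All Pods belonging to any WordPress installation
kubectl get pods -l app.kubernetes.io/name=wordpress

# Only the Pods of one specific installation
kubectl get pods -l app.kubernetes.io/name=wordpress,app.kubernetes.io/instance=wordpress-abcxyz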

Examples
To illustrate different ways to use these labels the following examples have varying complexity.

A Simple Stateless Service


Consider the case for a simple stateless service deployed using Deployment and Service objects. The following two snippets
represent how the labels could be used in their simplest form.

The Deployment is used to oversee the pods running the application itself.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: myservice
    app.kubernetes.io/instance: myservice-abcxyz
...

The Service is used to expose the application.

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: myservice
    app.kubernetes.io/instance: myservice-abcxyz
...

Web Application With A Database


Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The
following snippets illustrate the start of objects used to deploy this application.

The start to the following Deployment is used for WordPress:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxyz
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
...

The Service is used to expose WordPress:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxyz
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
...

MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxyz
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
...

The Service is used to expose MySQL as part of WordPress:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxyz
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
...

With the MySQL StatefulSet and Service, you'll notice that information about both MySQL and WordPress, the broader application, is
included.

1.2 - Kubernetes Components


A Kubernetes cluster consists of the components that are a part of the control plane and a set of machines
called nodes.

When you deploy Kubernetes, you get a cluster.

A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at
least one worker node.

The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker
nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a
cluster usually runs multiple nodes, providing fault-tolerance and high availability.

This document outlines the various components you need to have for a complete and working Kubernetes cluster.

[Diagram: The components of a Kubernetes cluster. The control plane runs the API server (api), etcd, the scheduler (sched), the controller manager (c-m), and an optional cloud controller manager (c-c-m); each node runs a kubelet and kube-proxy (k-proxy).]

Control Plane Components


The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and
responding to cluster events (for example, starting up a new pod when a Deployment's replicas field is unsatisfied).

Control plane components can be run on any machine in the cluster. However, for simplicity, setup scripts typically start all control
plane components on the same machine, and do not run user containers on this machine. See Creating Highly Available clusters
with kubeadm for an example control plane setup that runs across multiple machines.
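
In a kubeadm-based cluster, for instance, the control plane components usually run as Pods in the kube-system namespace, so you can typically list them with the following command (this depends on how your cluster was installed, and the output will vary):

kubectl get pods -n kube-system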

kube-apiserver
The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for
the Kubernetes control plane.

The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally—that is, it
scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.

etcd
Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.

If your Kubernetes cluster uses etcd as its backing store, make sure you have a back up plan for the data.

You can find in-depth information about etcd in the official documentation.

kube-scheduler
Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.

Factors taken into account for scheduling decisions include: individual and collective resource requirements,
hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and
deadlines.

kube-controller-manager
Control plane component that runs controller processes.

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a
single process.

There are many different types of controllers. Some examples of them are:

Node controller: Responsible for noticing and responding when nodes go down.
Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
EndpointSlice controller: Populates EndpointSlice objects (to provide a link between Services and Pods).
ServiceAccount controller: Create default ServiceAccounts for new namespaces.

The above is not an exhaustive list.

cloud-controller-manager
A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components
that only interact with your cluster.
The cloud-controller-manager only runs controllers that are specific to your cloud provider. If you are running Kubernetes on your
own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.

As with the kube-controller-manager, the cloud-controller-manager combines several logically independent control loops into a
single binary that you run as a single process. You can scale horizontally (run more than one copy) to improve performance or to
help tolerate failures.

The following controllers can have cloud provider dependencies:

Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
Route controller: For setting up routes in the underlying cloud infrastructure
Service controller: For creating, updating and deleting cloud provider load balancers

Node Components
Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

kubelet
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.

The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in
those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.

kube-proxy
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.

kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network
sessions inside or outside of your cluster.

kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the
traffic itself.

Container runtime
A fundamental component that empowers Kubernetes to run containers effectively. It is responsible for managing the execution and
lifecycle of containers within the Kubernetes environment.

Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container
Runtime Interface).

Addons
Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features. Because these are providing
cluster-level features, namespaced resources for addons belong within the kube-system namespace.

Selected addons are described below; for an extended list of available addons, please see Addons.

DNS
While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.

Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes
services.

Containers started by Kubernetes automatically include this DNS server in their DNS searches.

Web UI (Dashboard)
Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications
running in the cluster, as well as the cluster itself.

Container Resource Monitoring


Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for
browsing that data.

Cluster-level Logging
A cluster-level logging mechanism is responsible for saving container logs to a central log store with search/browsing interface.

Network Plugins
Network plugins are software components that implement the container network interface (CNI) specification. They are responsible
for allocating IP addresses to pods and enabling them to communicate with each other within the cluster.

What's next
Learn more about the following:

Nodes and their communication with the control plane.


Kubernetes controllers.
kube-scheduler which is the default scheduler for Kubernetes.
Etcd's official documentation.
Several container runtimes in Kubernetes.
Integrating with cloud providers using cloud-controller-manager.
kubectl commands.

1.3 - The Kubernetes API


The Kubernetes API lets you query and manipulate the state of objects in Kubernetes. The core of Kubernetes'
control plane is the API server and the HTTP API that it exposes. Users, the different parts of your cluster, and
external components all communicate with one another through the API server.

The core of Kubernetes' control plane is the API server. The API server exposes an HTTP API that lets end users, different parts of
your cluster, and external components communicate with one another.

The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes (for example: Pods, Namespaces,
ConfigMaps, and Events).

Most operations can be performed through the kubectl command-line interface or other command-line tools, such as kubeadm,
which in turn use the API. However, you can also access the API directly using REST calls. Kubernetes provides a set of client libraries
for those looking to write applications using the Kubernetes API.

Each Kubernetes cluster publishes the specification of the APIs that the cluster serves. There are two mechanisms that Kubernetes
uses to publish these API specifications; both are useful to enable automatic interoperability. For example, the kubectl tool fetches
and caches the API specification for enabling command-line completion and other features. The two supported mechanisms are as
follows:

The Discovery API provides information about the Kubernetes APIs: API names, resources, versions, and supported operations.
This is a Kubernetes specific term as it is a separate API from the Kubernetes OpenAPI. It is intended to be a brief summary of
the available resources and it does not detail specific schema for the resources. For reference about resource schemas, please
refer to the OpenAPI document.

The Kubernetes OpenAPI Document provides (full) OpenAPI v2.0 and 3.0 schemas for all Kubernetes API endpoints. The
OpenAPI v3 is the preferred method for accessing OpenAPI as it provides a more comprehensive and accurate view of the API.
It includes all the available API paths, as well as all resources consumed and produced for every operation on every endpoint.
It also includes any extensibility components that a cluster supports. The data is a complete specification and is significantly
larger than that from the Discovery API.

Discovery API
Kubernetes publishes a list of all group versions and resources supported via the Discovery API. This includes the following for each
resource:

Name
Cluster or namespaced scope
Endpoint URL and supported verbs
Alternative names
Group, version, kind

The API is available in both aggregated and unaggregated forms. Aggregated discovery serves two endpoints, while unaggregated
discovery serves a separate endpoint for each group version.

Aggregated discovery

ⓘ FEATURE STATE: Kubernetes v1.30 [stable]

Kubernetes offers stable support for aggregated discovery, publishing all resources supported by a cluster through two endpoints
( /api and /apis ). Requesting this endpoint drastically reduces the number of requests sent to fetch the discovery data from the
cluster. You can access the data by requesting the respective endpoints with an Accept header indicating the aggregated discovery
resource: Accept: application/json;v=v2;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList .

Without indicating the resource type using the Accept header, the default response for the /api and /apis endpoint is an
unaggregated discovery document.
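
A minimal sketch of requesting the aggregated discovery document, assuming kubectl proxy is running locally on port 8001:

kubectl proxy --port=8001 &
curl -H 'Accept: application/json;v=v2;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList' http://localhost:8001/apis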

The discovery document for the built-in resources can be found in the Kubernetes GitHub repository. This Github document can be
used as a reference of the base set of the available resources if a Kubernetes cluster is not available to query.

The endpoint also supports ETag and protobuf encoding.

Unaggregated discovery
Without discovery aggregation, discovery is published in levels, with the root endpoints publishing discovery information for
downstream documents.

A list of all group versions supported by a cluster is published at the /api and /apis endpoints. Example:

{
  "kind": "APIGroupList",
  "apiVersion": "v1",
  "groups": [
    {
      "name": "apiregistration.k8s.io",
      "versions": [
        {
          "groupVersion": "apiregistration.k8s.io/v1",
          "version": "v1"
        }
      ],
      "preferredVersion": {
        "groupVersion": "apiregistration.k8s.io/v1",
        "version": "v1"
      }
    },
    {
      "name": "apps",
      "versions": [
        {
          "groupVersion": "apps/v1",
          "version": "v1"
        }
      ],
      "preferredVersion": {
        "groupVersion": "apps/v1",
        "version": "v1"
      }
    },
    ...
  ]
}

Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example:
/apis/rbac.authorization.k8s.io/v1alpha1 ), which advertises the list of resources served under a particular group version. These
endpoints are used by kubectl to fetch the list of resources supported by a cluster.
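
You can also fetch the unaggregated discovery document for a single group version yourself; for example (the group version shown is just one choice):

kubectl get --raw /apis/apps/v1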

OpenAPI interface definition


For details about the OpenAPI specifications, see the OpenAPI documentation.

Kubernetes serves both OpenAPI v2.0 and OpenAPI v3.0. OpenAPI v3 is the preferred method of accessing the OpenAPI because it
offers a more comprehensive (lossless) representation of Kubernetes resources. Due to limitations of OpenAPI version 2, certain
fields are dropped from the published OpenAPI including but not limited to default , nullable , oneOf .

OpenAPI V2
The Kubernetes API server serves an aggregated OpenAPI v2 spec via the /openapi/v2 endpoint. You can request the response
format using request headers as follows:

Header            Possible values                                               Notes

Accept-Encoding   gzip                                                          not supplying this header is also acceptable

Accept            application/com.github.proto-openapi.spec.v2@v1.0+protobuf    mainly for intra-cluster use

                  application/json                                              default

                  *                                                             serves application/json
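
For example, to save the aggregated v2 spec in its default JSON form:

kubectl get --raw /openapi/v2 > openapi-v2.json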

OpenAPI V3

ⓘ FEATURE STATE: Kubernetes v1.27 [stable]

Kubernetes supports publishing a description of its APIs as OpenAPI v3.

A discovery endpoint /openapi/v3 is provided to see a list of all group/versions available. This endpoint only returns JSON. These
group/versions are provided in the following format:

{
  "paths": {
    ...,
    "api/v1": {
      "serverRelativeURL": "/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864A..."
    },
    "apis/admissionregistration.k8s.io/v1": {
      "serverRelativeURL": "/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CD..."
    },
    ...
  }
}

The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching
headers are also set by the API server for that purpose ( Expires to 1 year in the future, and Cache-Control to immutable ). When an
obsolete URL is used, the API server returns a redirect to the newest URL.

The Kubernetes API server publishes an OpenAPI v3 spec per Kubernetes group version at the /openapi/v3/apis/<group>/<version>?
hash=<hash> endpoint.
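
For example, you can list the available group/version documents and then fetch one of them (the group version chosen here is just an example; omitting the hash query parameter also works):

kubectl get --raw /openapi/v3
kubectl get --raw /openapi/v3/apis/apps/v1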

Refer to the table below for accepted request headers.

Header            Possible values                                               Notes

Accept-Encoding   gzip                                                          not supplying this header is also acceptable

Accept            application/com.github.proto-openapi.spec.v3@v1.0+protobuf    mainly for intra-cluster use

                  application/json                                              default

                  *                                                             serves application/json

A Golang implementation to fetch the OpenAPI V3 is provided in the package k8s.io/client-go/openapi3 .

Kubernetes 1.30 publishes OpenAPI v2.0 and v3.0; there are no plans to support 3.1 in the near future.

Protobuf serialization
Kubernetes implements an alternative Protobuf based serialization format that is primarily intended for intra-cluster
communication. For more information about this format, see the Kubernetes Protobuf serialization design proposal and the
Interface Definition Language (IDL) files for each schema located in the Go packages that define the API objects.

Persistence
Kubernetes stores the serialized state of objects by writing them into etcd.

API groups and versioning


To make it easier to eliminate fields or restructure resource representations, Kubernetes supports multiple API versions, each at a
different API path, such as /api/v1 or /apis/rbac.authorization.k8s.io/v1alpha1 .

Versioning is done at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of
system resources and behavior, and to enable controlling access to end-of-life and/or experimental APIs.

To make it easier to evolve and to extend its API, Kubernetes implements API groups that can be enabled or disabled.

API resources are distinguished by their API group, resource type, namespace (for namespaced resources), and name. The API server
handles the conversion between API versions transparently: all the different versions are actually representations of the same
persisted data. The API server may serve the same underlying data through multiple API versions.

For example, suppose there are two API versions, v1 and v1beta1 , for the same resource. If you originally created an object using
the v1beta1 version of its API, you can later read, update, or delete that object using either the v1beta1 or the v1 API version, until
the v1beta1 version is deprecated and removed. At that point you can continue accessing and modifying the object using the v1
API.
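
You can check which API versions and resources your own cluster serves; for example:

kubectl api-versions
kubectl api-resources --api-group=apps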

API changes
Any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, the
Kubernetes API is designed to continuously change and grow. The Kubernetes project aims not to break compatibility with
existing clients, and to maintain that compatibility for a length of time so that other projects have an opportunity to adapt.

In general, new API resources and new resource fields can be added often and frequently. Elimination of resources or fields requires
following the API deprecation policy.

Kubernetes makes a strong commitment to maintain compatibility for official Kubernetes APIs once they reach general availability
(GA), typically at API version v1 . Additionally, Kubernetes maintains compatibility with data persisted via beta API versions of official
Kubernetes APIs, and ensures that data can be converted and accessed via GA API versions when the feature goes stable.

If you adopt a beta API version, you will need to transition to a subsequent beta or stable API version once the API graduates. The
best time to do this is while the beta API is in its deprecation period, since objects are simultaneously accessible via both API
versions. Once the beta API completes its deprecation period and is no longer served, the replacement API version must be used.

Note:
Although Kubernetes also aims to maintain compatibility for alpha APIs versions, in some circumstances this is not possible. If
you use any alpha API versions, check the release notes for Kubernetes when upgrading your cluster, in case the API did change
in incompatible ways that require deleting all existing alpha objects prior to upgrade.

Refer to API versions reference for more details on the API version level definitions.

API Extension
The Kubernetes API can be extended in one of two ways:

1. Custom resources let you declaratively define how the API server should provide your chosen resource API.
2. You can also extend the Kubernetes API by implementing an aggregation layer.

What's next
Learn how to extend the Kubernetes API by adding your own CustomResourceDefinition.
Controlling Access To The Kubernetes API describes how the cluster manages authentication and authorization for API access.
Learn about API endpoints, resource types and samples by reading API Reference.
Learn about what constitutes a compatible change, and how to change the API, from API changes.

2 - Cluster Architecture
The architectural concepts behind Kubernetes.

[Diagram: Kubernetes cluster architecture. The control plane runs etcd, kube-api-server, kube-scheduler, kube-controller-manager, and cloud-controller-manager (which talks to the cloud provider API); each node runs a kubelet, kube-proxy, a CRI container runtime, and Pods.]

2.1 - Nodes
Kubernetes runs your workload by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine,
depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods.

Typically you have several nodes in a cluster; in a learning or resource-limited environment, you might have only one node.

The components on a node include the kubelet, a container runtime, and the kube-proxy.

Management
There are two main ways to have Nodes added to the API server:

1. The kubelet on a node self-registers to the control plane


2. You (or another human user) manually add a Node object

After you create a Node object, or the kubelet on a node self-registers, the control plane checks whether the new Node object is
valid. For example, if you try to create a Node from the following JSON manifest:

{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.240.79.157",
    "labels": {
      "name": "my-first-k8s-node"
    }
  }
}

Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server
that matches the metadata.name field of the Node. If the node is healthy (i.e. all necessary services are running), then it is eligible to
run a Pod. Otherwise, that node is ignored for any cluster activity until it becomes healthy.

Note:
Kubernetes keeps the object for the invalid Node and continues checking to see whether it becomes healthy.

You, or a controller, must explicitly delete the Node object to stop that health checking.

The name of a Node object must be a valid DNS subdomain name.

Node name uniqueness


The name identifies a Node. Two Nodes cannot have the same name at the same time. Kubernetes also assumes that a resource
with the same name is the same object. In case of a Node, it is implicitly assumed that an instance using the same name will have
the same state (e.g. network settings, root disk contents) and attributes like node labels. This may lead to inconsistencies if an
instance was modified without changing its name. If the Node needs to be replaced or updated significantly, the existing Node object
needs to be removed from API server first and re-added after the update.

Self-registration of Nodes
When the kubelet flag --register-node is true (the default), the kubelet will attempt to register itself with the API server. This is the
preferred pattern, used by most distros.

For self-registration, the kubelet is started with the following options:

--kubeconfig - Path to credentials to authenticate itself to the API server.

--cloud-provider - How to talk to a cloud provider to read metadata about itself.

--register-node - Automatically register with the API server.

--register-with-taints - Register the node with the given list of taints (comma separated <key>=<value>:<effect> ).

No-op if register-node is false.

--node-ip - Optional comma-separated list of the IP addresses for the node. You can only specify a single address for each
address family. For example, in a single-stack IPv4 cluster, you set this value to be the IPv4 address that the kubelet should use
for the node. See configure IPv4/IPv6 dual stack for details of running a dual-stack cluster.

If you don't provide this argument, the kubelet uses the node's default IPv4 address, if any; if the node has no IPv4 addresses
then the kubelet uses the node's default IPv6 address.

--node-labels - Labels to add when registering the node in the cluster (see label restrictions enforced by the NodeRestriction
admission plugin).

--node-status-update-frequency - Specifies how often kubelet posts its node status to the API server.

When the Node authorization mode and NodeRestriction admission plugin are enabled, kubelets are only authorized to
create/modify their own Node resource.
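
Putting a few of these flags together, a minimal sketch of a kubelet invocation might look like the following (the kubeconfig path, taint, and label are assumptions; most installations set these through the kubelet configuration file or a systemd drop-in instead):

kubelet --kubeconfig=/etc/kubernetes/kubelet.conf \
  --register-node=true \
  --register-with-taints=dedicated=experimental:NoSchedule \
  --node-labels=topology.example.com/rack=rack-1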

Note:
As mentioned in the Node name uniqueness section, when Node configuration needs to be updated, it is a good practice to re-
register the node with the API server. For example, if the kubelet is being restarted with a new set of --node-labels , but the
same Node name is used, the change will not take effect, as labels are only set (or modified) upon Node registration with the API
server.

Pods already scheduled on the Node may misbehave or cause issues if the Node configuration is changed on kubelet restart. For
example, an already running Pod may be tainted against the new labels assigned to the Node, while other Pods that are
incompatible with that Pod will be scheduled based on this new label. Node re-registration ensures all Pods will be drained
and properly re-scheduled.

Manual Node administration


You can create and modify Node objects using kubectl.

When you want to create Node objects manually, set the kubelet flag --register-node=false .

You can modify Node objects regardless of the setting of --register-node . For example, you can set labels on an existing Node or
mark it unschedulable.
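
For example, to add a label to an existing Node (the node name and label here are assumptions):

kubectl label nodes worker-1 disktype=ssd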

You can use labels on Nodes in conjunction with node selectors on Pods to control scheduling. For example, you can constrain a Pod
to only be eligible to run on a subset of the available nodes.

Marking a node as unschedulable prevents the scheduler from placing new pods onto that Node but does not affect existing Pods
on the Node. This is useful as a preparatory step before a node reboot or other maintenance.

To mark a Node unschedulable, run:

kubectl cordon $NODENAME

See Safely Drain a Node for more details.

Note:
Pods that are part of a DaemonSet tolerate being run on an unschedulable Node. DaemonSets typically provide node-local
services that should run on the Node even if it is being drained of workload applications.

Node status
A Node's status contains the following information:

Addresses
Conditions
Capacity and Allocatable
Info

You can use kubectl to view a Node's status and other details:

kubectl describe node <insert-node-name-here>

See Node Status for more details.

Node heartbeats
Heartbeats, sent by Kubernetes nodes, help your cluster determine the availability of each node, and to take action when failures
are detected.
For nodes there are two forms of heartbeats:

Updates to the .status of a Node.


Lease objects within the kube-node-lease namespace. Each Node has an associated Lease object.
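
You can inspect these Lease objects directly; for example:

kubectl get leases -n kube-node-lease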

Node controller
The node controller is a Kubernetes control plane component that manages various aspects of nodes.

The node controller has multiple roles in a node's life. The first is assigning a CIDR block to the node when it is registered (if CIDR
assignment is turned on).

The second is keeping the node controller's internal list of nodes up to date with the cloud provider's list of available machines.
When running in a cloud environment and whenever a node is unhealthy, the node controller asks the cloud provider if the VM for
that node is still available. If not, the node controller deletes the node from its list of nodes.

The third is monitoring the nodes' health. The node controller is responsible for:

In the case that a node becomes unreachable, updating the Ready condition in the Node's .status field. In this case the node
controller sets the Ready condition to Unknown .
If a node remains unreachable: triggering API-initiated eviction for all of the Pods on the unreachable node. By default, the
node controller waits 5 minutes between marking the node as Unknown and submitting the first eviction request.

By default, the node controller checks the state of each node every 5 seconds. This period can be configured using the --node-
monitor-period flag on the kube-controller-manager component.

Rate limits on eviction


In most cases, the node controller limits the eviction rate to --node-eviction-rate (default 0.1) per second, meaning it won't evict
pods from more than 1 node per 10 seconds.

The node eviction behavior changes when a node in a given availability zone becomes unhealthy. The node controller checks what
percentage of nodes in the zone are unhealthy (the Ready condition is Unknown or False ) at the same time:

If the fraction of unhealthy nodes is at least --unhealthy-zone-threshold (default 0.55), then the eviction rate is reduced.
If the cluster is small (i.e. has less than or equal to --large-cluster-size-threshold nodes - default 50), then evictions are
stopped.
Otherwise, the eviction rate is reduced to --secondary-node-eviction-rate (default 0.01) per second.

The reason these policies are implemented per availability zone is because one availability zone might become partitioned from the
control plane while the others remain connected. If your cluster does not span multiple cloud provider availability zones, then the
eviction mechanism does not take per-zone unavailability into account.

A key reason for spreading your nodes across availability zones is so that the workload can be shifted to healthy zones when one
entire zone goes down. Therefore, if all nodes in a zone are unhealthy, then the node controller evicts at the normal rate of --node-
eviction-rate . The corner case is when all zones are completely unhealthy (none of the nodes in the cluster are healthy). In such a
case, the node controller assumes that there is some problem with connectivity between the control plane and the nodes, and
doesn't perform any evictions. (If there has been an outage and some nodes reappear, the node controller does evict pods from the
remaining nodes that are unhealthy or unreachable).

The node controller is also responsible for evicting pods running on nodes with NoExecute taints, unless those pods tolerate that
taint. The node controller also adds taints corresponding to node problems like node unreachable or not ready. This means that the
scheduler won't place Pods onto unhealthy nodes.

Resource capacity tracking


Node objects track information about the Node's resource capacity: for example, the amount of memory available and the number
of CPUs. Nodes that self register report their capacity during registration. If you manually add a Node, then you need to set the
node's capacity information when you add it.

The Kubernetes scheduler ensures that there are enough resources for all the Pods on a Node. The scheduler checks that the sum of
the requests of containers on the node is no greater than the node's capacity. That sum of requests includes all containers managed
by the kubelet, but excludes any containers started directly by the container runtime, and also excludes any processes running
outside of the kubelet's control.

Note:
If you want to explicitly reserve resources for non-Pod processes, see reserve resources for system daemons.

Node topology
ⓘ FEATURE STATE: Kubernetes v1.27 [stable]

If you have enabled the TopologyManager feature gate, then the kubelet can use topology hints when making resource assignment
decisions. See Control Topology Management Policies on a Node for more information.

Swap memory management


ⓘ FEATURE STATE: Kubernetes v1.30 [beta]

To enable swap on a node, the NodeSwap feature gate must be enabled on the kubelet (default is true), and the --fail-swap-on
command line flag or failSwapOn configuration setting must be set to false. To allow Pods to utilize swap, swapBehavior should not
be set to NoSwap (which is the default behavior) in the kubelet config.

Warning:
When the memory swap feature is turned on, Kubernetes data such as the content of Secret objects that were written to tmpfs
now could be swapped to disk.

A user can also optionally configure memorySwap.swapBehavior in order to specify how a node will use swap memory. For example,

memorySwap:
  swapBehavior: LimitedSwap

NoSwap (default): Kubernetes workloads will not use swap.

LimitedSwap : The utilization of swap memory by Kubernetes workloads is subject to limitations. Only Pods of Burstable QoS are
permitted to employ swap.

If configuration for memorySwap is not specified and the feature gate is enabled, by default the kubelet will apply the same behaviour
as the NoSwap setting.

With LimitedSwap , Pods that do not fall under the Burstable QoS classification (i.e. BestEffort / Guaranteed QoS Pods) are prohibited
from utilizing swap memory. To maintain the aforementioned security and node health guarantees, these Pods are not permitted to
use swap memory when LimitedSwap is in effect.

Prior to detailing the calculation of the swap limit, it is necessary to define the following terms:

nodeTotalMemory : The total amount of physical memory available on the node.

totalPodsSwapAvailable : The total amount of swap memory on the node that is available for use by Pods (some swap memory
may be reserved for system use).

containerMemoryRequest : The container's memory request.

Swap limitation is configured as: (containerMemoryRequest / nodeTotalMemory) * totalPodsSwapAvailable .
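
For example (the numbers here are purely illustrative): on a node with 64 GiB of memory and 16 GiB of swap available to Pods, a container that requests 8 GiB of memory would be limited to (8 / 64) * 16 = 2 GiB of swap.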

It is important to note that, for containers within Burstable QoS Pods, it is possible to opt-out of swap usage by specifying memory
requests that are equal to memory limits. Containers configured in this manner will not have access to swap memory.

Swap is supported only with cgroup v2, cgroup v1 is not supported.

For more information, and to assist with testing and provide feedback, please see the blog-post about Kubernetes 1.28: NodeSwap
graduates to Beta1, KEP-2400 and its design proposal.

What's next
Learn more about the following:

Components that make up a node.


API definition for Node.
Node section of the architecture design document.
Graceful/non-graceful node shutdown.
Cluster autoscaling to manage the number and size of nodes in your cluster.
Taints and Tolerations.
Node Resource Managers.
Resource Management for Windows nodes.

2.2 - Communication between Nodes and the Control


Plane
This document catalogs the communication paths between the API server and the Kubernetes cluster. The intent is to allow users to
customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on
fully public IPs on a cloud provider).

Node to Control Plane


Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminates at the API server. None of
the other control plane components are designed to expose remote services. The API server is configured to listen for remote
connections on a secure HTTPS port (typically 443) with one or more forms of client authentication enabled. One or more forms of
authorization should be enabled, especially if anonymous requests or service account tokens are allowed.

Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the API server
along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client
certificate. See kubelet TLS bootstrapping for automated provisioning of kubelet client certificates.

Pods that wish to connect to the API server can do so securely by leveraging a service account so that Kubernetes will automatically
inject the public root certificate and a valid bearer token into the pod when it is instantiated. The kubernetes service (in default
namespace) is configured with a virtual IP address that is redirected (via kube-proxy ) to the HTTPS endpoint on the API server.

The control plane components also communicate with the API server over the secure port.

As a result, the default operating mode for connections from the nodes, and from the pods running on the nodes, to the control
plane is secure, and these connections can run over untrusted and/or public networks.

Control plane to node


There are two primary communication paths from the control plane (the API server) to the nodes. The first is from the API server to
the kubelet process which runs on each node in the cluster. The second is from the API server to any node, pod, or service through
the API server's proxy functionality.

API server to kubelet


The connections from the API server to the kubelet are used for:

Fetching logs for pods.


Attaching (usually through kubectl ) to running pods.
Providing the kubelet's port-forwarding functionality.

These connections terminate at the kubelet's HTTPS endpoint. By default, the API server does not verify the kubelet's serving
certificate, which makes the connection subject to man-in-the-middle attacks and unsafe to run over untrusted and/or public
networks.

To verify this connection, use the --kubelet-certificate-authority flag to provide the API server with a root certificate bundle to use
to verify the kubelet's serving certificate.
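
For example, you might add the following flag to the kube-apiserver invocation (the certificate path is an assumption):

--kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt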

If that is not possible, use SSH tunneling between the API server and kubelet if required to avoid connecting over an untrusted or
public network.

Finally, Kubelet authentication and/or authorization should be enabled to secure the kubelet API.

API server to nodes, pods, and services


The connections from the API server to a node, pod, or service default to plain HTTP connections and are therefore neither
authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing https: to the node, pod, or service
name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials. So while
the connection will be encrypted, it will not provide any guarantees of integrity. These connections are not currently safe to run
over untrusted or public networks.

SSH tunnels
Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this configuration, the API server
initiates an SSH tunnel to each node in the cluster (connecting to the SSH server listening on port 22) and passes all traffic destined
for a kubelet, node, pod, or service through the tunnel. This tunnel ensures that the traffic is not exposed outside of the network in
which the nodes are running.

Note:
SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you are doing. The Konnectivity
service is a replacement for this communication channel.

Konnectivity service

ⓘ FEATURE STATE: Kubernetes v1.18 [beta]

As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster
communication. The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the
Konnectivity agents in the nodes network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the
network connections. After enabling the Konnectivity service, all control plane to nodes traffic goes through these connections.

Follow the Konnectivity service task to set up the Konnectivity service in your cluster.

What's next
Read about the Kubernetes control plane components
Learn more about Hubs and Spoke model
Learn how to Secure a Cluster
Learn more about the Kubernetes API
Set up Konnectivity service
Use Port Forwarding to Access Applications in a Cluster
Learn how to Fetch logs for Pods, use kubectl port-forward

2.3 - Controllers
In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system.

Here is one example of a control loop: a thermostat in a room.

When you set the temperature, that's telling the thermostat about your desired state. The actual room temperature is the current
state. The thermostat acts to bring the current state closer to the desired state, by turning equipment on or off.

In Kubernetes, controllers are control loops that watch the state of your cluster, then make or request changes where needed. Each
controller tries to move the current cluster state closer to the desired state.

Controller pattern
A controller tracks at least one Kubernetes resource type. These objects have a spec field that represents the desired state. The
controller(s) for that resource are responsible for making the current state come closer to that desired state.

The controller might carry the action out itself; more commonly, in Kubernetes, a controller will send messages to the API server that
have useful side effects. You'll see examples of this below.

Control via API server


The Job controller is an example of a Kubernetes built-in controller. Built-in controllers manage state by interacting with the cluster
API server.

Job is a Kubernetes resource that runs a Pod, or perhaps several Pods, to carry out a task and then stop.
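For example, a minimal (hedged) Job manifest could look like the following; the name, image, and command are placeholders used only for illustration:

apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.32.0
        command: ["sh", "-c", "echo 'work done'"]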

(Once scheduled, Pod objects become part of the desired state for a kubelet).

When the Job controller sees a new task it makes sure that, somewhere in your cluster, the kubelets on a set of Nodes are running
the right number of Pods to get the work done. The Job controller does not run any Pods or containers itself. Instead, the Job
controller tells the API server to create or remove Pods. Other components in the control plane act on the new information (there
are new Pods to schedule and run), and eventually the work is done.

After you create a new Job, the desired state is for that Job to be completed. The Job controller makes the current state for that Job
be nearer to your desired state: creating Pods that do the work you wanted for that Job, so that the Job is closer to completion.

Controllers also update the objects that configure them. For example: once the work is done for a Job, the Job controller updates
that Job object to mark it Finished .

(This is a bit like how some thermostats turn a light off to indicate that your room is now at the temperature you set).

Direct control
In contrast with Job, some controllers need to make changes to things outside of your cluster.

For example, if you use a control loop to make sure there are enough Nodes in your cluster, then that controller needs something
outside the current cluster to set up new Nodes when needed.

Controllers that interact with external state find their desired state from the API server, then communicate directly with an external
system to bring the current state closer in line.

(There actually is a controller that horizontally scales the nodes in your cluster.)

The important point here is that the controller makes some changes to bring about your desired state, and then reports the current
state back to your cluster's API server. Other control loops can observe that reported data and take their own actions.

In the thermostat example, if the room is very cold then a different controller might also turn on a frost protection heater. With
Kubernetes clusters, the control plane indirectly works with IP address management tools, storage services, cloud provider APIs, and
other services by extending Kubernetes to implement that.

Desired versus current state


Kubernetes takes a cloud-native view of systems, and is able to handle constant change.

Your cluster could be changing at any point as work happens and control loops automatically fix failures. This means that,
potentially, your cluster never reaches a stable state.

As long as the controllers for your cluster are running and able to make useful changes, it doesn't matter if the overall state is stable
or not.

Design
As a tenet of its design, Kubernetes uses lots of controllers that each manage a particular aspect of cluster state. Most commonly, a
particular control loop (controller) uses one kind of resource as its desired state, and has a different kind of resource that it manages
to make that desired state happen. For example, a controller for Jobs tracks Job objects (to discover new work) and Pod objects (to
run the Jobs, and then to see when the work is finished). In this case something else creates the Jobs, whereas the Job controller
creates Pods.

It's useful to have simple controllers rather than one, monolithic set of control loops that are interlinked. Controllers can fail, so
Kubernetes is designed to allow for that.

Note:
There can be several controllers that create or update the same kind of object. Behind the scenes, Kubernetes controllers make
sure that they only pay attention to the resources linked to their controlling resource.

For example, you can have Deployments and Jobs; these both create Pods. The Job controller does not delete the Pods that your
Deployment created, because there is information (labels) the controllers can use to tell those Pods apart.

Ways of running controllers


Kubernetes comes with a set of built-in controllers that run inside the kube-controller-manager. These built-in controllers provide
important core behaviors.

The Deployment controller and Job controller are examples of controllers that come as part of Kubernetes itself ("built-in"
controllers). Kubernetes lets you run a resilient control plane, so that if any of the built-in controllers were to fail, another part of the
control plane will take over the work.

You can find controllers that run outside the control plane, to extend Kubernetes. Or, if you want, you can write a new controller
yourself. You can run your own controller as a set of Pods, or externally to Kubernetes. What fits best will depend on what that
particular controller does.

What's next
Read about the Kubernetes control plane
Discover some of the basic Kubernetes objects
Learn more about the Kubernetes API
If you want to write your own controller, see Kubernetes extension patterns and the sample-controller repository.

2.4 - Leases
Distributed systems often have a need for leases, which provide a mechanism to lock shared resources and coordinate activity
between members of a set. In Kubernetes, the lease concept is represented by Lease objects in the coordination.k8s.io API Group,
which are used for system-critical capabilities such as node heartbeats and component-level leader election.

Node heartbeats
Kubernetes uses the Lease API to communicate kubelet node heartbeats to the Kubernetes API server. For every Node , there is a
Lease object with a matching name in the kube-node-lease namespace. Under the hood, every kubelet heartbeat is an update
request to this Lease object, updating the spec.renewTime field for the Lease. The Kubernetes control plane uses the time stamp of
this field to determine the availability of this Node .
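You can observe these Lease objects directly; for example (the Lease names in the output match your node names):

kubectl -n kube-node-lease get lease
# Inspect a single node's Lease in detail (replace <node-name> with one of your node names)
kubectl -n kube-node-lease get lease <node-name> -o yaml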

See Node Lease objects for more details.

Leader election
Kubernetes also uses Leases to ensure only one instance of a component is running at any given time. This is used by control plane
components like kube-controller-manager and kube-scheduler in HA configurations, where only one instance of the component
should be actively running while the other instances are on stand-by.

API server identity


ⓘ FEATURE STATE: Kubernetes v1.26 [beta]

Starting in Kubernetes v1.26, each kube-apiserver uses the Lease API to publish its identity to the rest of the system. While not
particularly useful on its own, this provides a mechanism for clients to discover how many instances of kube-apiserver are operating
the Kubernetes control plane. Existence of kube-apiserver leases enables future capabilities that may require coordination between
each kube-apiserver.

You can inspect Leases owned by each kube-apiserver by checking for lease objects in the kube-system namespace with the name
kube-apiserver-<sha256-hash> . Alternatively you can use the label selector apiserver.kubernetes.io/identity=kube-apiserver :

kubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver

NAME HOLDER A
apiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5
apiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4
apiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4

The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be
configured to use a hostname that is unique within the cluster. New instances of kube-apiserver that use the same hostname will
take over existing Leases using a new holder identity, as opposed to instantiating new Lease objects. You can check the hostname
used by kube-apiserver by checking the value of the kubernetes.io/hostname label:

kubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
creationTimestamp: "2023-07-02T13:16:48Z"
labels:
apiserver.kubernetes.io/identity: kube-apiserver
kubernetes.io/hostname: master-1
name: apiserver-07a5ea9b9b072c4a5f3d1c3702
namespace: kube-system
resourceVersion: "334899"
uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1
spec:
holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05
leaseDurationSeconds: 3600
renewTime: "2023-07-04T21:58:48.065888Z"

Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.

You can disable API server identity leases by disabling the APIServerIdentity feature gate.

Workloads
Your own workload can define its own use of Leases. For example, you might run a custom controller where a primary or leader
member performs operations that its peers do not. You define a Lease so that the controller replicas can select or elect a leader,
using the Kubernetes API for coordination. If you do use a Lease, it's a good practice to define a name for the Lease that is obviously
linked to the product or component. For example, if you have a component named Example Foo, use a Lease named example-foo .
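As a hedged sketch, such a Lease might look like the following; the name follows the example above, and the holderIdentity, duration, and timestamp values are purely illustrative:

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-foo
  namespace: default
spec:
  holderIdentity: example-foo-instance-1
  leaseDurationSeconds: 15
  renewTime: "2024-07-10T09:28:00.000000Z"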

If a cluster operator or another end user could deploy multiple instances of a component, select a name prefix and pick a
mechanism (such as hash of the name of the Deployment) to avoid name collisions for the Leases.

You can use another approach so long as it achieves the same outcome: different software products do not conflict with one
another.

2.5 - Cloud Controller Manager


ⓘ FEATURE STATE: Kubernetes v1.11 [beta]

Cloud infrastructure technologies let you run Kubernetes on public, private, and hybrid clouds. Kubernetes believes in automated,
API-driven infrastructure without tight coupling between components.

The cloud-controller-manager is a Kubernetes control plane component that embeds cloud-specific control logic. The cloud
controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that
cloud platform from components that only interact with your cluster.

By decoupling the interoperability logic between Kubernetes and the underlying cloud infrastructure, the cloud-controller-manager
component enables cloud providers to release features at a different pace compared to the main Kubernetes project.

The cloud-controller-manager is structured using a plugin mechanism that allows different cloud providers to integrate their
platforms with Kubernetes.

Design
[Architecture diagram: a Kubernetes cluster with its control plane (API server, etcd, scheduler, controller manager, and the optional cloud controller manager) and its nodes, each running a kubelet and kube-proxy.]

The cloud controller manager runs in the control plane as a replicated set of processes (usually, these are containers in Pods). Each
cloud-controller-manager implements multiple controllers in a single process.

Note:
You can also run the cloud controller manager as a Kubernetes addon rather than as part of the control plane.

Cloud controller manager functions


The controllers inside the cloud controller manager include:

Node controller
The node controller is responsible for updating Node objects when new servers are created in your cloud infrastructure. The node
controller obtains information about the hosts running inside your tenancy with the cloud provider. The node controller performs
the following functions:

1. Update a Node object with the corresponding server's unique identifier obtained from the cloud provider API.
2. Annotating and labelling the Node object with cloud-specific information, such as the region the node is deployed into and the
resources (CPU, memory, etc) that it has available.
3. Obtain the node's hostname and network addresses.


4. Verifying the node's health. In case a node becomes unresponsive, this controller checks with your cloud provider's API to see if
the server has been deactivated / deleted / terminated. If the node has been deleted from the cloud, the controller deletes the
Node object from your Kubernetes cluster.

Some cloud provider implementations split this into a node controller and a separate node lifecycle controller.

Route controller
The route controller is responsible for configuring routes in the cloud appropriately so that containers on different nodes in your
Kubernetes cluster can communicate with each other.

Depending on the cloud provider, the route controller might also allocate blocks of IP addresses for the Pod network.

Service controller
Services integrate with cloud infrastructure components such as managed load balancers, IP addresses, network packet filtering,
and target health checking. The service controller interacts with your cloud provider's APIs to set up load balancers and other
infrastructure components when you declare a Service resource that requires them.

Authorization
This section breaks down the access that the cloud controller manager requires on various API objects, in order to perform its
operations.

Node controller
The Node controller only works with Node objects. It requires full access to read and modify Node objects.

v1/Node :

get
list
create
update
patch
watch
delete

Route controller
The route controller listens to Node object creation and configures routes appropriately. It requires Get access to Node objects.

v1/Node :

get

Service controller
The service controller watches for Service object create, update and delete events and then configures Endpoints for those
Services appropriately (for EndpointSlices, the kube-controller-manager manages these on demand).

To access Services, it requires list and watch access. To update Services, it requires patch and update access.

To set up Endpoints resources for the Services, it requires access to create, list, get, watch, and update.

v1/Service :

list
get
watch
patch
update

Others
The implementation of the core of the cloud controller manager requires access to create Event objects, and to ensure secure
operation, it requires access to create ServiceAccounts.

v1/Event :

create
patch
update

v1/ServiceAccount :

create

The RBAC ClusterRole for the cloud controller manager looks like:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cloud-controller-manager
rules:
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- '*'
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
- apiGroups:
- ""
resources:
- services
verbs:
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- apiGroups:
- ""
resources:
- persistentvolumes
verbs:
- get
- list
- update
- watch
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- get
- list
- watch
- update

What's next
Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.

To upgrade a HA control plane to use the cloud controller manager, see Migrate Replicated Control Plane To Use Cloud
Controller Manager.

Want to know how to implement your own cloud controller manager, or extend an existing project?

The cloud controller manager uses Go interfaces, specifically, CloudProvider interface defined in cloud.go from
kubernetes/cloud-provider to allow implementations from any cloud to be plugged in.
The implementation of the shared controllers highlighted in this document (Node, Route, and Service), and some
scaffolding along with the shared cloudprovider interface, is part of the Kubernetes core. Implementations specific to
cloud providers are outside the core of Kubernetes and implement the CloudProvider interface.
For more information about developing plugins, see Developing Cloud Controller Manager.

2.6 - About cgroup v2


On Linux, control groups constrain resources that are allocated to processes.

The kubelet and the underlying container runtime need to interface with cgroups to enforce resource management for pods and
containers which includes cpu/memory requests and limits for containerized workloads.

There are two versions of cgroups in Linux: cgroup v1 and cgroup v2. cgroup v2 is the new generation of the cgroup API.

What is cgroup v2?


ⓘ FEATURE STATE: Kubernetes v1.25 [stable]

cgroup v2 is the next version of the Linux cgroup API. cgroup v2 provides a unified control system with enhanced resource
management capabilities.

cgroup v2 offers several improvements over cgroup v1, such as the following:

Single unified hierarchy design in API


Safer sub-tree delegation to containers
Newer features like Pressure Stall Information
Enhanced resource allocation management and isolation across multiple resources
Unified accounting for different types of memory allocations (network memory, kernel memory, etc)
Accounting for non-immediate resource changes such as page cache write backs

Some Kubernetes features exclusively use cgroup v2 for enhanced resource management and isolation. For example, the
MemoryQoS feature improves memory QoS and relies on cgroup v2 primitives.

Using cgroup v2
The recommended way to use cgroup v2 is to use a Linux distribution that enables and uses cgroup v2 by default.

To check if your distribution uses cgroup v2, refer to Identify cgroup version on Linux nodes.

Requirements
cgroup v2 has the following requirements:

OS distribution enables cgroup v2


Linux Kernel version is 5.8 or later
Container runtime supports cgroup v2. For example:
containerd v1.4 and later
cri-o v1.20 and later
The kubelet and the container runtime are configured to use the systemd cgroup driver

Linux Distribution cgroup v2 support


For a list of Linux distributions that use cgroup v2, refer to the cgroup v2 documentation

Container Optimized OS (since M97)


Ubuntu (since 21.10, 22.04+ recommended)
Debian GNU/Linux (since Debian 11 bullseye)
Fedora (since 31)
Arch Linux (since April 2021)
RHEL and RHEL-like distributions (since 9)

To check if your distribution is using cgroup v2, refer to your distribution's documentation or follow the instructions in Identify the
cgroup version on Linux nodes.

You can also enable cgroup v2 manually on your Linux distribution by modifying the kernel cmdline boot arguments. If your
distribution uses GRUB, systemd.unified_cgroup_hierarchy=1 should be added in GRUB_CMDLINE_LINUX under /etc/default/grub ,
followed by sudo update-grub . However, the recommended approach is to use a distribution that already enables cgroup v2 by
default.
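If you do choose the manual route, a hedged sketch of the change looks like this; the existing contents of GRUB_CMDLINE_LINUX on your system will differ, and the exact steps vary by distribution:

# In /etc/default/grub, append the parameter to the existing kernel command line:
GRUB_CMDLINE_LINUX="... systemd.unified_cgroup_hierarchy=1"
# Then regenerate the GRUB configuration and reboot the node:
sudo update-grub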

Migrating to cgroup v2
To migrate to cgroup v2, ensure that you meet the requirements, then upgrade to a kernel version that enables cgroup v2 by
default.

The kubelet automatically detects that the OS is running on cgroup v2 and performs accordingly with no additional configuration
required.

There should not be any noticeable difference in the user experience when switching to cgroup v2, unless users are accessing the
cgroup file system directly, either on the node or from within the containers.

cgroup v2 uses a different API than cgroup v1, so if there are any applications that directly access the cgroup file system, they need
to be updated to newer versions that support cgroup v2. For example:

Some third-party monitoring and security agents may depend on the cgroup filesystem. Update these agents to versions that
support cgroup v2.
If you run cAdvisor as a stand-alone DaemonSet for monitoring pods and containers, update it to v0.43.0 or later.
If you deploy Java applications, prefer to use versions which fully support cgroup v2:
OpenJDK / HotSpot: jdk8u372, 11.0.16, 15 and later
IBM Semeru Runtimes: 8.0.382.0, 11.0.20.0, 17.0.8.0, and later
IBM Java: 8.0.8.6 and later
If you are using the uber-go/automaxprocs package, make sure the version you use is v1.5.1 or higher.

Identify the cgroup version on Linux Nodes


The cgroup version depends on the Linux distribution being used and the default cgroup version configured on the OS. To check
which cgroup version your distribution uses, run the stat -fc %T /sys/fs/cgroup/ command on the node:

stat -fc %T /sys/fs/cgroup/

For cgroup v2, the output is cgroup2fs .

For cgroup v1, the output is tmpfs.

What's next
Learn more about cgroups
Learn more about container runtime
Learn more about cgroup drivers

2.7 - Container Runtime Interface (CRI)


The CRI is a plugin interface which enables the kubelet to use a wide variety of container runtimes, without having a need to
recompile the cluster components.

You need a working container runtime on each Node in your cluster, so that the kubelet can launch Pods and their containers.

The Kubernetes Container Runtime Interface (CRI) defines the main gRPC protocol for the communication between the node components: the kubelet and the container runtime.

The API
ⓘ FEATURE STATE: Kubernetes v1.23 [stable]

The kubelet acts as a client when connecting to the container runtime via gRPC. The runtime and image service endpoints have to be
available in the container runtime, which can be configured separately within the kubelet by using the --image-service-endpoint
command line flag.
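For example, with containerd the two endpoints are commonly pointed at the same socket; the socket path below is an assumption that depends on how your runtime is installed:

kubelet \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --image-service-endpoint=unix:///run/containerd/containerd.sock
  # ...and other flags as usual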

For Kubernetes v1.30, the kubelet prefers to use CRI v1 . If a container runtime does not support v1 of the CRI, then the kubelet
tries to negotiate any older supported version. The v1.30 kubelet can also negotiate CRI v1alpha2 , but this version is considered as
deprecated. If the kubelet cannot negotiate a supported CRI version, the kubelet gives up and doesn't register as a node.

Upgrading
When upgrading Kubernetes, the kubelet tries to automatically select the latest CRI version on restart of the component. If that fails,
then the fallback will take place as mentioned above. If a gRPC re-dial was required because the container runtime has been
upgraded, then the container runtime must also support the initially selected version or the redial is expected to fail. This requires a
restart of the kubelet.

What's next
Learn more about the CRI protocol definition

2.8 - Garbage Collection


Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up cluster resources. This allows the
clean up of resources like the following:

Terminated pods
Completed Jobs
Objects without owner references
Unused containers and container images
Dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete
Stale or expired CertificateSigningRequests (CSRs)
Nodes deleted in the following scenarios:
On a cloud when the cluster uses a cloud controller manager
On-premises when the cluster uses an addon similar to a cloud controller manager
Node Lease objects

Owners and dependents


Many objects in Kubernetes link to each other through owner references. Owner references tell the control plane which objects are
dependent on others. Kubernetes uses owner references to give the control plane, and other API clients, the opportunity to clean up
related resources before deleting an object. In most cases, Kubernetes manages owner references automatically.

Ownership is different from the labels and selectors mechanism that some resources also use. For example, consider a Service that
creates EndpointSlice objects. The Service uses labels to allow the control plane to determine which EndpointSlice objects are used
for that Service. In addition to the labels, each EndpointSlice that is managed on behalf of a Service has an owner reference. Owner
references help different parts of Kubernetes avoid interfering with objects they don’t control.

Note:
Cross-namespace owner references are disallowed by design. Namespaced dependents can specify cluster-scoped or
namespaced owners. A namespaced owner must exist in the same namespace as the dependent. If it does not, the owner
reference is treated as absent, and the dependent is subject to deletion once all owners are verified absent.

Cluster-scoped dependents can only specify cluster-scoped owners. In v1.20+, if a cluster-scoped dependent specifies a
namespaced kind as an owner, it is treated as having an unresolvable owner reference, and is not able to be garbage collected.

In v1.20+, if the garbage collector detects an invalid cross-namespace ownerReference , or a cluster-scoped dependent with an
ownerReference referencing a namespaced kind, a warning Event with a reason of OwnerRefInvalidNamespace and an
involvedObject of the invalid dependent is reported. You can check for that kind of Event by running kubectl get events -A --
field-selector=reason=OwnerRefInvalidNamespace .

Cascading deletion
Kubernetes checks for and deletes objects that no longer have owner references, like the pods left behind when you delete a
ReplicaSet. When you delete an object, you can control whether Kubernetes deletes the object's dependents automatically, in a
process called cascading deletion. There are two types of cascading deletion, as follows:

Foreground cascading deletion


Background cascading deletion

You can also control how and when garbage collection deletes resources that have owner references using Kubernetes finalizers.

Foreground cascading deletion


In foreground cascading deletion, the owner object you're deleting first enters a deletion in progress state. In this state, the following
happens to the owner object:

The Kubernetes API server sets the object's metadata.deletionTimestamp field to the time the object was marked for deletion.
The Kubernetes API server also sets the metadata.finalizers field to foregroundDeletion .
The object remains visible through the Kubernetes API until the deletion process is complete.

After the owner object enters the deletion in progress state, the controller deletes the dependents. After deleting all the dependent
objects, the controller deletes the owner object. At this point, the object is no longer visible in the Kubernetes API.

During foreground cascading deletion, the only dependents that block owner deletion are those that have the
ownerReference.blockOwnerDeletion=true field. See Use foreground cascading deletion to learn more.

Background cascading deletion


In background cascading deletion, the Kubernetes API server deletes the owner object immediately and the controller cleans up the
dependent objects in the background. By default, Kubernetes uses background cascading deletion unless you manually use
foreground deletion or choose to orphan the dependent objects.

See Use background cascading deletion to learn more.

Orphaned dependents
When Kubernetes deletes an owner object, the dependents left behind are called orphan objects. By default, Kubernetes deletes
dependent objects. To learn how to override this behaviour, see Delete owner objects and orphan dependents.
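For reference, kubectl exposes these behaviours through its --cascade flag; the Deployment name below is a hypothetical example:

kubectl delete deployment example-deployment --cascade=foreground
kubectl delete deployment example-deployment --cascade=background   # the default
kubectl delete deployment example-deployment --cascade=orphan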

Garbage collection of unused containers and images


The kubelet performs garbage collection on unused images every two minutes and on unused containers every minute. You should
avoid using external garbage collection tools, as these can break the kubelet behavior and remove containers that should exist.

To configure options for unused container and image garbage collection, tune the kubelet using a configuration file and change the
parameters related to garbage collection using the KubeletConfiguration resource type.

Container image lifecycle


Kubernetes manages the lifecycle of all images through its image manager, which is part of the kubelet, with the cooperation of
cadvisor. The kubelet considers the following disk usage limits when making garbage collection decisions:

HighThresholdPercent

LowThresholdPercent

Disk usage above the configured HighThresholdPercent value triggers garbage collection, which deletes images in order based on the
last time they were used, starting with the oldest first. The kubelet deletes images until disk usage reaches the LowThresholdPercent
value.
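These thresholds correspond to fields in the kubelet configuration file; here is a hedged sketch using the documented default values of 85 and 80 percent:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80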

Garbage collection for unused container images

ⓘ FEATURE STATE: Kubernetes v1.30 [beta]

As a beta feature, you can specify the maximum time a local image can be unused for, regardless of disk usage. This is a kubelet
setting that you configure for each node.

To configure the setting, enable the ImageMaximumGCAge feature gate for the kubelet, and also set a value for the imageMaximumGCAge
field in the kubelet configuration file.

The value is specified as a Kubernetes duration; Valid time units for the imageMaximumGCAge field in the kubelet configuration file are:

"ns" for nanoseconds


"us" or "µs" for microseconds
"ms" for milliseconds
"s" for seconds
"m" for minutes
"h" for hours

For example, you can set the configuration field to 12h45m , which means 12 hours and 45 minutes.
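A hedged sketch of the corresponding kubelet configuration, reusing the 12h45m example and enabling the required feature gate:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  ImageMaximumGCAge: true
imageMaximumGCAge: 12h45m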

Note:
This feature does not track image usage across kubelet restarts. If the kubelet is restarted, the tracked image age is reset,
causing the kubelet to wait the full imageMaximumGCAge duration before qualifying images for garbage collection based on image
age.

Container garbage collection


The kubelet garbage collects unused containers based on the following variables, which you can define:

MinAge : the minimum age at which the kubelet can garbage collect a container. Disable by setting to 0 .
MaxPerPodContainer : the maximum number of dead containers each Pod can have. Disable by setting to less than 0 .
MaxContainers : the maximum number of dead containers the cluster can have. Disable by setting to less than 0 .

In addition to these variables, the kubelet garbage collects unidentified and deleted containers, typically starting with the oldest first.

MaxPerPodContainer and MaxContainers may potentially conflict with each other in situations where retaining the maximum number
of containers per Pod ( MaxPerPodContainer ) would go outside the allowable total of global dead containers ( MaxContainers ). In this
situation, the kubelet adjusts MaxPerPodContainer to address the conflict. A worst-case scenario would be to downgrade
MaxPerPodContainer to 1 and evict the oldest containers. Additionally, containers owned by pods that have been deleted are
removed once they are older than MinAge .

Note:
The kubelet only garbage collects the containers it manages.

Configuring garbage collection


You can tune garbage collection of resources by configuring options specific to the controllers managing those resources. The
following pages show you how to configure garbage collection:

Configuring cascading deletion of Kubernetes objects


Configuring cleanup of finished Jobs

What's next
Learn more about ownership of Kubernetes objects.
Learn more about Kubernetes finalizers.
Learn about the TTL controller that cleans up finished Jobs.

2.9 - Mixed Version Proxy


ⓘ FEATURE STATE: Kubernetes v1.28 [alpha]

Kubernetes 1.30 includes an alpha feature that lets an API Server proxy resource requests to other peer API servers. This is useful
when there are multiple API servers running different versions of Kubernetes in one cluster (for example, during a long-lived rollout
to a new release of Kubernetes).

This enables cluster administrators to configure highly available clusters that can be upgraded more safely, by directing resource
requests (made during the upgrade) to the correct kube-apiserver. That proxying prevents users from seeing unexpected 404 Not
Found errors that stem from the upgrade process.

This mechanism is called the Mixed Version Proxy.

Enabling the Mixed Version Proxy


Ensure that UnknownVersionInteroperabilityProxy feature gate is enabled when you start the API Server:

kube-apiserver \
--feature-gates=UnknownVersionInteroperabilityProxy=true \
# required command line arguments for this feature
--peer-ca-file=<path to kube-apiserver CA cert>
--proxy-client-cert-file=<path to aggregator proxy cert>,
--proxy-client-key-file=<path to aggregator proxy key>,
--requestheader-client-ca-file=<path to aggregator CA cert>,
# requestheader-allowed-names can be set to blank to allow any Common Name
--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,

# optional flags for this feature


--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`
--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`

# …and other flags as usual

Proxy transport and authentication between API servers


The source kube-apiserver reuses the existing API server client authentication flags --proxy-client-cert-file and --proxy-
client-key-file to present its identity that will be verified by its peer (the destination kube-apiserver). The destination API
server verifies that peer connection based on the configuration you specify using the --requestheader-client-ca-file
command line argument.

To authenticate the destination server's serving certs, you must configure a certificate authority bundle by specifying the --
peer-ca-file command line argument to the source API server.

Configuration for peer API server connectivity


To set the network location of a kube-apiserver that peers will use to proxy requests, use the --peer-advertise-ip and --peer-
advertise-port command line arguments to kube-apiserver or specify these fields in the API server configuration file. If these flags
are unspecified, peers will use the value from either --advertise-address or --bind-address command line argument to the kube-
apiserver. If those too, are unset, the host's default interface is used.

Mixed version proxying


When you enable mixed version proxying, the aggregation layer loads a special filter that does the following:

When a resource request reaches an API server that cannot serve that API (either because it is at a version pre-dating the
introduction of the API or the API is turned off on the API server) the API server attempts to send the request to a peer API
server that can serve the requested API. It does so by identifying API groups / versions / resources that the local server doesn't
recognise, and tries to proxy those requests to a peer API server that is capable of handling the request.

If the peer API server fails to respond, the source API server responds with 503 ("Service Unavailable") error.

How it works under the hood


When an API Server receives a resource request, it first checks which API servers can serve the requested resource. This check
happens using the internal StorageVersion API.

If the resource is known to the API server that received the request (for example, GET /api/v1/pods/some-pod ), the request is
handled locally.

If there is no internal StorageVersion object found for the requested resource (for example, GET /my-api/v1/my-resource ) and
the configured APIService specifies proxying to an extension API server, that proxying happens following the usual flow for
extension APIs.

If a valid internal StorageVersion object is found for the requested resource (for example, GET /batch/v1/jobs ) and the API
server trying to handle the request (the handling API server) has the batch API disabled, then the handling API server fetches the
peer API servers that do serve the relevant API group / version / resource ( api/v1/batch in this case) using the information in
the fetched StorageVersion object. The handling API server then proxies the request to one of the matching peer kube-
apiservers that are aware of the requested resource.

If there is no peer known for that API group / version / resource, the handling API server passes the request to its own
handler chain which should eventually return a 404 ("Not Found") response.

If the handling API server has identified and selected a peer API server, but that peer fails to respond (for reasons such as
network connectivity issues, or a data race between the request being received and a controller registering the peer's info
into the control plane), then the handling API server responds with a 503 ("Service Unavailable") error.


3 - Containers
Technology for packaging an application along with its runtime dependencies.

Each container that you run is repeatable; the standardization from having dependencies included means that you get the same
behavior wherever you run it.

Containers decouple applications from the underlying host infrastructure. This makes deployment easier in different cloud or OS
environments.

Each node in a Kubernetes cluster runs the containers that form the Pods assigned to that node. Containers in a Pod are co-located
and co-scheduled to run on the same node.

Container images
A container image is a ready-to-run software package containing everything needed to run an application: the code and any runtime
it requires, application and system libraries, and default values for any essential settings.

Containers are intended to be stateless and immutable: you should not change the code of a container that is already running. If you
have a containerized application and want to make changes, the correct process is to build a new image that includes the change,
then recreate the container to start from the updated image.

Container runtimes
A fundamental component that empowers Kubernetes to run containers effectively. It is responsible for managing the execution and
lifecycle of containers within the Kubernetes environment.

Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container
Runtime Interface).

Usually, you can allow your cluster to pick the default container runtime for a Pod. If you need to use more than one container
runtime in your cluster, you can specify the RuntimeClass for a Pod to make sure that Kubernetes runs those containers using a
particular container runtime.

You can also use RuntimeClass to run different Pods with the same container runtime but with different settings.

3.1 - Images
A container image represents binary data that encapsulates an application and all its software dependencies. Container images are
executable software bundles that can run standalone and that make very well defined assumptions about their runtime
environment.

You typically create a container image of your application and push it to a registry before referring to it in a Pod.

This page provides an outline of the container image concept.

Note:
If you are looking for the container images for a Kubernetes release (such as v1.30, the latest minor release), visit Download
Kubernetes.

Image names
Container images are usually given a name such as pause , example/mycontainer , or kube-apiserver . Images can also include a
registry hostname; for example: fictional.registry.example/imagename , and possibly a port number as well; for example:
fictional.registry.example:10443/imagename .

If you don't specify a registry hostname, Kubernetes assumes that you mean the Docker public registry. You can change this behaviour by setting a default image registry in the container runtime configuration.

After the image name part you can add a tag or digest (in the same way you would when using with commands like docker or
podman ). Tags let you identify different versions of the same series of images. Digests are a unique identifier for a specific version of
an image. Digests are hashes of the image's content, and are immutable. Tags can be moved to point to different images, but digests
are fixed.

Image tags consist of lowercase and uppercase letters, digits, underscores ( _ ), periods ( . ), and dashes ( - ). A tag can be up to 128 characters long and must match the regex pattern [a-zA-Z0-9_][a-zA-Z0-9._-]{0,127}. You can read more and find the validation regex in the OCI Distribution Specification. If you don't specify a tag, Kubernetes assumes you mean the tag latest.

Image digests consist of a hash algorithm (such as sha256) and a hash value. For example: sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07. You can find more information about the digest format in the OCI Image Specification.

Some image name examples that Kubernetes can use are:

busybox - Image name only, no tag or digest. Kubernetes will use the Docker public registry and the latest tag. (Same as docker.io/library/busybox:latest )

busybox:1.32.0 - Image name with tag. Kubernetes will use the Docker public registry. (Same as docker.io/library/busybox:1.32.0 )

registry.k8s.io/pause:latest - Image name with a custom registry and the latest tag.

registry.k8s.io/pause:3.5 - Image name with a custom registry and a non-latest tag.

registry.k8s.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 - Image name with digest.

registry.k8s.io/pause:3.5@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 - Image name with tag and digest. Only the digest will be used for pulling.

Updating images
When you first create a Deployment, StatefulSet, Pod, or other object that includes a Pod template, then by default the pull policy of
all containers in that pod will be set to IfNotPresent if it is not explicitly specified. This policy causes the kubelet to skip pulling an
image if it already exists.

Image pull policy


The imagePullPolicy for a container and the tag of the image affect when the kubelet attempts to pull (download) the specified
image.

Here's a list of the values you can set for imagePullPolicy and the effects these values have:

IfNotPresent

the image is pulled only if it is not already present locally.

Always

every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image
digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise,
the kubelet pulls the image with the resolved digest, and uses that image to launch the container.

Never

the kubelet does not try fetching the image. If the image is somehow already present locally, the kubelet attempts to start the
container; otherwise, startup fails. See pre-pulled images for more details.

The caching semantics of the underlying image provider make even imagePullPolicy: Always efficient, as long as the registry is
reliably accessible. Your container runtime can notice that the image layers already exist on the node so that they don't need to be
downloaded again.

Note:
You should avoid using the :latest tag when deploying containers in production as it is harder to track which version of the
image is running and more difficult to roll back properly.

Instead, specify a meaningful tag such as v1.42.0 and/or a digest.

To make sure the Pod always uses the same version of a container image, you can specify the image's digest; replace <image-name>:
<tag> with <image-name>@<digest> (for example, image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2 ).
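As a hedged illustration, a Pod that pins its container image by digest could look like this; the image name is a placeholder and the digest is the example value from the text above:

apiVersion: v1
kind: Pod
metadata:
  name: example-digest-pod
spec:
  containers:
  - name: app
    image: registry.example/app@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
    imagePullPolicy: IfNotPresent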

When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a
mix of Pods running the old and new code. An image digest uniquely identifies a specific version of the image, so Kubernetes runs
the same code every time it starts a container with that image name and digest specified. Specifying an image by digest fixes the
code that you run so that a change at the registry cannot lead to that mix of versions.

There are third-party admission controllers that mutate Pods (and pod templates) when they are created, so that the running
workload is defined based on an image digest rather than a tag. That might be useful if you want to make sure that all your workload
is running the same code no matter what tag changes happen at the registry.

Default image pull policy


When you (or a controller) submit a new Pod to the API server, your cluster sets the imagePullPolicy field when specific conditions
are met:

if you omit the imagePullPolicy field, and you specify the digest for the container image, the imagePullPolicy is automatically
set to IfNotPresent .
if you omit the imagePullPolicy field, and the tag for the container image is :latest , imagePullPolicy is automatically set to
Always ;

if you omit the imagePullPolicy field, and you don't specify the tag for the container image, imagePullPolicy is automatically
set to Always ;
if you omit the imagePullPolicy field, and you specify the tag for the container image that isn't :latest , the imagePullPolicy is
automatically set to IfNotPresent .

Note:
The value of imagePullPolicy of the container is always set when the object is first created, and is not updated if the image's tag
or digest later changes.

For example, if you create a Deployment with an image whose tag is not :latest , and later update that Deployment's image to a
:latest tag, the imagePullPolicy field will not change to Always . You must manually change the pull policy of any object after
its initial creation.

Required image pull


If you would like to always force a pull, you can do one of the following:

Set the imagePullPolicy of the container to Always .


Omit the imagePullPolicy and use :latest as the tag for the image to use; Kubernetes will set the policy to Always when you
submit the Pod.
Omit the imagePullPolicy and the tag for the image to use; Kubernetes will set the policy to Always when you submit the Pod.
Enable the AlwaysPullImages admission controller.

ImagePullBackOff
When a kubelet starts creating containers for a Pod using a container runtime, it might be possible the container is in Waiting state
because of ImagePullBackOff .

The status ImagePullBackOff means that a container could not start because Kubernetes could not pull a container image (for
reasons such as invalid image name, or pulling from a private registry without imagePullSecret ). The BackOff part indicates that
Kubernetes will keep trying to pull the image, with an increasing back-off delay.

Kubernetes raises the delay between each attempt until it reaches a compiled-in limit, which is 300 seconds (5 minutes).

Image pull per runtime class

ⓘ FEATURE STATE: Kubernetes v1.29 [alpha]

Kubernetes includes alpha support for performing image pulls based on the RuntimeClass of a Pod.

If you enable the RuntimeClassInImageCriApi feature gate, the kubelet references container images by a tuple of (image name,
runtime handler) rather than just the image name or digest. Your container runtime may adapt its behavior based on the selected
runtime handler. Pulling images based on runtime class is helpful for VM-based containers, such as Windows Hyper-V containers.

Serial and parallel image pulls


By default, kubelet pulls images serially. In other words, kubelet sends only one image pull request to the image service at a time.
Other image pull requests have to wait until the one being processed is complete.

Nodes make image pull decisions in isolation. Even when you use serialized image pulls, two different nodes can pull the same
image in parallel.

If you would like to enable parallel image pulls, you can set the field serializeImagePulls to false in the kubelet configuration. With
serializeImagePulls set to false, image pull requests will be sent to the image service immediately, and multiple images will be
pulled at the same time.

When enabling parallel image pulls, please make sure the image service of your container runtime can handle parallel image pulls.

The kubelet never pulls multiple images in parallel on behalf of one Pod. For example, if you have a Pod that has an init container
and an application container, the image pulls for the two containers will not be parallelized. However, if you have two Pods that use
different images, the kubelet pulls the images in parallel on behalf of the two different Pods, when parallel image pulls is enabled.

Maximum parallel image pulls

ⓘ FEATURE STATE: Kubernetes v1.27 [alpha]

When serializeImagePulls is set to false, the kubelet defaults to no limit on the maximum number of images being pulled at the
same time. If you would like to limit the number of parallel image pulls, you can set the field maxParallelImagePulls in kubelet
configuration. With maxParallelImagePulls set to n, only n images can be pulled at the same time, and any image pull beyond n will
have to wait until at least one ongoing image pull is complete.

Limiting the number of parallel image pulls prevents image pulling from consuming too much network bandwidth or disk I/O when parallel image pulling is enabled.

You can set maxParallelImagePulls to a positive number that is greater than or equal to 1. If you set maxParallelImagePulls to be
greater than or equal to 2, you must set the serializeImagePulls to false. The kubelet will fail to start with invalid
maxParallelImagePulls settings.
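A hedged sketch of a kubelet configuration that enables parallel pulls with a cap; the limit of 5 is an arbitrary illustrative value:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false
maxParallelImagePulls: 5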

Multi-architecture images with image indexes


As well as providing binary images, a container registry can also serve a container image index. An image index can point to multiple
image manifests for architecture-specific versions of a container. The idea is that you can have a name for an image (for example:
pause , example/mycontainer , kube-apiserver ) and allow different systems to fetch the right binary image for the machine
architecture they are using.

Kubernetes itself typically names container images with a suffix -$(ARCH) . For backward compatibility, please generate the older images with suffixes. The idea is to generate, for example, a pause image that has the manifest for all the architectures, and a pause-amd64 image that remains backwards compatible with older configurations or YAML files that may have hard-coded images with suffixes.

Using a private registry


Private registries may require keys to read images from them.
Credentials can be provided in several ways:

Configuring Nodes to Authenticate to a Private Registry
  all pods can read any configured private registries
  requires node configuration by the cluster administrator
Kubelet Credential Provider to dynamically fetch credentials for private registries
  the kubelet can be configured to use a credential provider exec plugin for the respective private registry.
Pre-pulled Images
  all pods can use any images cached on a node
  requires root access to all nodes to set up
Specifying ImagePullSecrets on a Pod
  only pods which provide their own keys can access the private registry
Vendor-specific or local extensions
  if you're using a custom node configuration, you (or your cloud provider) can implement your mechanism for
  authenticating the node to the container registry.

These options are explained in more detail below.

Configuring nodes to authenticate to a private registry


Specific instructions for setting credentials depend on the container runtime and registry you chose to use. Refer to your
solution's documentation for the most accurate information.

For an example of configuring a private container image registry, see the Pull an Image from a Private Registry task. That example
uses a private registry in Docker Hub.

Kubelet credential provider for authenticated image pulls

Note:
This approach is especially suitable when kubelet needs to fetch registry credentials dynamically. Most commonly used for
registries provided by cloud providers where auth tokens are short-lived.

You can configure the kubelet to invoke a plugin binary to dynamically fetch registry credentials for a container image. This is the
most robust and versatile way to fetch credentials for private registries, but also requires kubelet-level configuration to enable.

See Configure a kubelet image credential provider for more details.
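
As a hedged sketch of what such a setup can look like, the kubelet is started with the --image-credential-provider-config and
--image-credential-provider-bin-dir flags, and the referenced configuration file might resemble this (the plugin name and registry
pattern below are placeholders, not requirements):

apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  # "my-credential-provider" is a placeholder name for an exec plugin binary
  # located in the directory given by --image-credential-provider-bin-dir
  - name: my-credential-provider
    matchImages:
      - "*.registry.example.com"      # example pattern; images that trigger this plugin
    defaultCacheDuration: "12h"       # how long the kubelet caches returned credentials
    apiVersion: credentialprovider.kubelet.k8s.io/v1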

Interpretation of config.json
The interpretation of config.json varies between the original Docker implementation and the Kubernetes interpretation. In Docker,
the auths keys can only specify root URLs, whereas Kubernetes allows glob URLs as well as prefix-matched paths. The only
limitation is that glob patterns ( * ) have to include the dot ( . ) for each subdomain. The number of matched subdomains has to
equal the number of glob patterns ( *. ), for example:

*.kubernetes.io will not match kubernetes.io , but abc.kubernetes.io


*.*.kubernetes.io will not match abc.kubernetes.io , but abc.def.kubernetes.io


prefix.*.io will match prefix.kubernetes.io

*-good.kubernetes.io will match prefix-good.kubernetes.io

This means that a config.json like this is valid:

{
    "auths": {
        "my-registry.io/images": { "auth": "…" },
        "*.my-registry.io/images": { "auth": "…" }
    }
}

Image pull operations would now pass the credentials to the CRI container runtime for every valid pattern. For example, the following
container image names would match successfully:

my-registry.io/images

my-registry.io/images/my-image

my-registry.io/images/another-image

sub.my-registry.io/images/my-image

But not:

a.sub.my-registry.io/images/my-image

a.b.sub.my-registry.io/images/my-image

The kubelet performs image pulls sequentially for every found credential. This means that multiple entries in config.json for
different paths are possible, too:

{
    "auths": {
        "my-registry.io/images": {
            "auth": "…"
        },
        "my-registry.io/images/subpath": {
            "auth": "…"
        }
    }
}

If a container now specifies the image my-registry.io/images/subpath/my-image to be pulled, the kubelet tries to download it using
both matching credentials, so the pull can still succeed if one of them fails.

Pre-pulled images

Note:
This approach is suitable if you can control node configuration. It will not work reliably if your cloud provider manages nodes
and replaces them automatically.

By default, the kubelet tries to pull each image from the specified registry. However, if the imagePullPolicy property of the container
is set to IfNotPresent or Never , then a local image is used (preferentially or exclusively, respectively).

If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the
same pre-pulled images.

This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.

All pods will have read access to any pre-pulled images.
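
As a small sketch (the image name is hypothetical), a container that should only ever use an image already present on the node can
be declared like this:

apiVersion: v1
kind: Pod
metadata:
  name: prepulled-example
spec:
  containers:
    - name: app
      image: registry.example/app:v1   # assumed to be pre-pulled onto every node
      imagePullPolicy: Never           # never contact a registry; fail if the image is absent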


Specifying imagePullSecrets on a Pod

Note:
This is the recommended approach to run containers based on images in private registries.

Kubernetes supports specifying container image registry keys on a Pod. imagePullSecrets must all be in the same namespace as the
Pod. The referenced Secrets must be of type kubernetes.io/dockercfg or kubernetes.io/dockerconfigjson .

Creating a Secret with a Docker config


You need to know the username, registry password and client email address for authenticating to the registry, as well as its
hostname. Run the following command, substituting the appropriate uppercase values:

kubectl create secret docker-registry <name> \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL

If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a
Kubernetes Secret.
Create a Secret based on existing Docker credentials explains how to set this up.

This is particularly useful if you are using multiple private container registries, as kubectl create secret docker-registry creates a
Secret that only works with a single private registry.
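
If you prefer to declare the Secret as a manifest instead, a sketch looks like the following; the data value must be the base64
encoding of your existing Docker config.json (a placeholder is shown here):

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: awesomeapps
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded contents of your .docker/config.json (placeholder value)
  .dockerconfigjson: <base64-encoded-docker-config>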

Note:
Pods can only reference image pull secrets in their own namespace, so this process needs to be done one time per namespace.

Referring to an imagePullSecrets on a Pod


Now, you can create pods which reference that secret by adding an imagePullSecrets section to a Pod definition. Each item in the
imagePullSecrets array can only reference a Secret in the same namespace.

For example:

cat <<EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey
EOF

cat <<EOF >> ./kustomization.yaml
resources:
  - pod.yaml
EOF

This needs to be done for each pod that is using a private registry.

However, setting of this field can be automated by setting the imagePullSecrets in a ServiceAccount resource.


Check Add ImagePullSecrets to a Service Account for detailed instructions.

You can use this in conjunction with a per-node .docker/config.json . The credentials will be merged.
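
For example, a sketch of a ServiceAccount that injects that secret into every Pod which uses the ServiceAccount (the names reuse the
examples above):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: awesomeapps
imagePullSecrets:
  # Pods using this ServiceAccount automatically get this entry
  # merged into their own imagePullSecrets
  - name: myregistrykey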

Use cases
There are a number of solutions for configuring private registries. Here are some common use cases and suggested solutions.

1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
Use public images from a public registry
No configuration required.
Some cloud providers automatically cache or mirror public images, which improves availability and reduces the time
to pull images.
2. Cluster running some proprietary images which should be hidden to those outside the company, but visible to all cluster users.
Use a hosted private registry
Manual configuration may be required on the nodes that need to access the private registry
Or, run an internal private registry behind your firewall with open read access.
No Kubernetes configuration is required.
Use a hosted container image registry service that controls image access
It will work better with cluster autoscaling than manual node configuration.
Or, on a cluster where changing the node configuration is inconvenient, use imagePullSecrets .
3. Cluster with proprietary images, a few of which require stricter access control.
Ensure AlwaysPullImages admission controller is active. Otherwise, all Pods potentially have access to all images.
Move sensitive data into a "Secret" resource, instead of packaging it in an image.
4. A multi-tenant cluster where each tenant needs its own private registry.
Ensure AlwaysPullImages admission controller is active. Otherwise, all Pods of all tenants potentially have access to all
images.
Run a private registry with authorization required.
Generate a registry credential for each tenant, put it into a Secret, and populate that Secret into each tenant's namespace.
The tenant adds that Secret to the imagePullSecrets of their Pods in that namespace.

If you need access to multiple registries, you can create one secret for each registry.

Legacy built-in kubelet credential provider


In older versions of Kubernetes, the kubelet had a direct integration with cloud provider credentials. This gave it the ability to
dynamically fetch credentials for image registries.

There were three built-in implementations of the kubelet credential provider integration: ACR (Azure Container Registry), ECR (Elastic
Container Registry), and GCR (Google Container Registry).

For more information on the legacy mechanism, read the documentation for the version of Kubernetes that you are using.
Kubernetes v1.26 through to v1.30 do not include the legacy mechanism, so you would need to either:

configure a kubelet image credential provider on each node


specify image pull credentials using imagePullSecrets and at least one Secret

What's next
Read the OCI Image Manifest Specification.
Learn about container image garbage collection.
Learn more about pulling an Image from a Private Registry.


3.2 - Container Environment


This page describes the resources available to Containers in the Container environment.

Container environment
The Kubernetes Container environment provides several important resources to Containers:

A filesystem, which is a combination of an image and one or more volumes.


Information about the Container itself.
Information about other objects in the cluster.

Container information
The hostname of a Container is the name of the Pod in which the Container is running. It is available through the hostname command
or the gethostname function call in libc.

The Pod name and namespace are available as environment variables through the downward API.

User defined environment variables from the Pod definition are also available to the Container, as are any environment variables
specified statically in the container image.
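
As a brief sketch, a container can expose its own Pod name and namespace through the downward API like this (the variable names are
arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: env-example
spec:
  containers:
    - name: app
      image: busybox:1.28
      command: ['sh', '-c', 'echo "running as $MY_POD_NAMESPACE/$MY_POD_NAME" && sleep 3600']
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name      # the Pod's own name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace # the Pod's namespace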

Cluster information
A list of all services that were running when a Container was created is available to that Container as environment variables. This list
is limited to services within the same namespace as the new Container's Pod and Kubernetes control plane services.

For a service named foo that maps to a Container named bar, the following variables are defined:

FOO_SERVICE_HOST=<the host the service is running on>


FOO_SERVICE_PORT=<the port the service is running on>

Services have dedicated IP addresses and are available to the Container via DNS, if the DNS addon is enabled.

What's next
Learn more about Container lifecycle hooks.
Get hands-on experience attaching handlers to Container lifecycle events.


3.3 - Runtime Class


ⓘ FEATURE STATE: Kubernetes v1.20 [stable]

This page describes the RuntimeClass resource and runtime selection mechanism.

RuntimeClass is a feature for selecting the container runtime configuration. The container runtime configuration is used to run a
Pod's containers.

Motivation
You can set a different RuntimeClass between different Pods to provide a balance of performance versus security. For example, if
part of your workload deserves a high level of information security assurance, you might choose to schedule those Pods so that they
run in a container runtime that uses hardware virtualization. You'd then benefit from the extra isolation of the alternative runtime,
at the expense of some additional overhead.

You can also use RuntimeClass to run different Pods with the same container runtime but with different settings.

Setup
1. Configure the CRI implementation on nodes (runtime dependent)
2. Create the corresponding RuntimeClass resources

1. Configure the CRI implementation on nodes


The configurations available through RuntimeClass are Container Runtime Interface (CRI) implementation dependent. See the
corresponding documentation (below) for your CRI implementation for how to configure.

Note:
RuntimeClass assumes a homogeneous node configuration across the cluster by default (which means that all nodes are
configured the same way with respect to container runtimes). To support heterogeneous node configurations, see Scheduling
below.

The configurations have a corresponding handler name, referenced by the RuntimeClass. The handler must be a valid DNS label
name.

2. Create the corresponding RuntimeClass resources


The configurations setup in step 1 should each have an associated handler name, which identifies the configuration. For each
handler, create a corresponding RuntimeClass object.

The RuntimeClass resource currently only has 2 significant fields: the RuntimeClass name ( metadata.name ) and the handler
( handler ). The object definition looks like this:

# RuntimeClass is defined in the node.k8s.io API group
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  # The name the RuntimeClass will be referenced by.
  # RuntimeClass is a non-namespaced resource.
  name: myclass
# The name of the corresponding CRI configuration
handler: myconfiguration

The name of a RuntimeClass object must be a valid DNS subdomain name.


Note:
It is recommended that RuntimeClass write operations (create/update/patch/delete) be restricted to the cluster administrator.
This is typically the default. See Authorization Overview for more details.

Usage
Once RuntimeClasses are configured for the cluster, you can specify a runtimeClassName in the Pod spec to use it. For example:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: myclass
  # ...

This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI
cannot run the corresponding handler, the pod will enter the Failed terminal phase. Look for a corresponding event for an error
message.

If no runtimeClassName is specified, the default RuntimeHandler will be used, which is equivalent to the behavior when the
RuntimeClass feature is disabled.

CRI Configuration
For more details on setting up CRI runtimes, see CRI installation.

containerd
Runtime handlers are configured through containerd's configuration at /etc/containerd/config.toml . Valid handlers are configured
under the runtimes section:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]

See containerd's config documentation for more details.

CRI-O
Runtime handlers are configured through CRI-O's configuration at /etc/crio/crio.conf . Valid handlers are configured under the
crio.runtime table:

[crio.runtime.runtimes.${HANDLER_NAME}]
runtime_path = "${PATH_TO_BINARY}"

See CRI-O's config documentation for more details.

Scheduling
ⓘ FEATURE STATE: Kubernetes v1.16 [beta]

By specifying the scheduling field for a RuntimeClass, you can set constraints to ensure that Pods running with this RuntimeClass
are scheduled to nodes that support it. If scheduling is not set, this RuntimeClass is assumed to be supported by all nodes.


To ensure pods land on nodes supporting a specific RuntimeClass, that set of nodes should have a common label which is then
selected by the runtimeclass.scheduling.nodeSelector field. The RuntimeClass's nodeSelector is merged with the pod's nodeSelector
in admission, effectively taking the intersection of the set of nodes selected by each. If there is a conflict, the pod will be rejected.

If the supported nodes are tainted to prevent other RuntimeClass pods from running on the node, you can add tolerations to the
RuntimeClass. As with the nodeSelector , the tolerations are merged with the pod's tolerations in admission, effectively taking the
union of the set of nodes tolerated by each.

To learn more about configuring the node selector and tolerations, see Assigning Pods to Nodes.
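
A sketch of a RuntimeClass restricted to labelled and tainted nodes might look like this; the label and taint keys are illustrative,
not standard names:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass
handler: myconfiguration
scheduling:
  nodeSelector:
    # example label carried only by nodes that have the handler configured
    runtime.example.com/handler: myconfiguration
  tolerations:
    # example taint used to reserve those nodes
    - key: runtime.example.com/dedicated
      operator: Exists
      effect: NoSchedule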

Pod Overhead

ⓘ FEATURE STATE: Kubernetes v1.24 [stable]

You can specify overhead resources that are associated with running a Pod. Declaring overhead allows the cluster (including the
scheduler) to account for it when making decisions about Pods and resources.

Pod overhead is defined in RuntimeClass through the overhead field. Through the use of this field, you can specify the overhead of
running pods utilizing this RuntimeClass and ensure these overheads are accounted for in Kubernetes.
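
For example, a RuntimeClass for a hypothetical virtualized runtime could declare a fixed per-Pod overhead like this; the handler name
and the quantities are illustrative only:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: vm-runtime-example   # hypothetical name
handler: vmrunner            # hypothetical CRI handler
overhead:
  podFixed:
    memory: "120Mi"          # extra memory accounted for each Pod using this class
    cpu: "250m"              # extra CPU accounted for each such Pod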

What's next
RuntimeClass Design
RuntimeClass Scheduling Design
Read about the Pod Overhead concept
PodOverhead Feature Design


3.4 - Container Lifecycle Hooks


This page describes how kubelet managed Containers can use the Container lifecycle hook framework to run code triggered by
events during their management lifecycle.

Overview
Analogous to many programming language frameworks that have component lifecycle hooks, such as Angular, Kubernetes provides
Containers with lifecycle hooks. The hooks enable Containers to be aware of events in their management lifecycle and run code
implemented in a handler when the corresponding lifecycle hook is executed.

Container hooks
There are two hooks that are exposed to Containers:

PostStart

This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the
container ENTRYPOINT. No parameters are passed to the handler.

PreStop

This hook is called immediately before a container is terminated due to an API request or management event such as a
liveness/startup probe failure, preemption, resource contention and others. A call to the PreStop hook fails if the container is
already in a terminated or completed state and the hook must complete before the TERM signal to stop the container can be sent.
The Pod's termination grace period countdown begins before the PreStop hook is executed, so regardless of the outcome of the
handler, the container will eventually terminate within the Pod's termination grace period. No parameters are passed to the handler.

A more detailed description of the termination behavior can be found in Termination of Pods.

Hook handler implementations


Containers can access a hook by implementing and registering a handler for that hook. There are three types of hook handlers that
can be implemented for Containers:

Exec - Executes a specific command, such as pre-stop.sh , inside the cgroups and namespaces of the Container. Resources
consumed by the command are counted against the Container.
HTTP - Executes an HTTP request against a specific endpoint on the Container.
Sleep - Pauses the container for a specified duration. This is a beta-level feature, enabled by default by the
PodLifecycleSleepAction feature gate.
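
The sketch below (the image and commands are illustrative, along the lines of the lifecycle-events.yaml example referenced later)
shows how an exec handler can be attached to each hook:

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
    - name: lifecycle-demo-container
      image: nginx
      lifecycle:
        postStart:
          exec:
            # runs inside the container right after it is created
            command: ["/bin/sh", "-c", "echo started > /usr/share/message"]
        preStop:
          exec:
            # runs before the kubelet sends TERM to the container
            command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]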

Hook handler execution


When a Container lifecycle management hook is called, the Kubernetes management system executes the handler according to the
hook action: httpGet , tcpSocket and sleep handlers are executed by the kubelet process, and exec is executed in the container.

Hook handler calls are synchronous within the context of the Pod containing the Container. This means that for a PostStart hook,
the Container ENTRYPOINT and hook fire asynchronously. However, if the hook takes too long to run or hangs, the Container cannot
reach a running state.

PreStop hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before
the TERM signal can be sent. If a PreStop hook hangs during execution, the Pod's phase will be Terminating and remain there until
the Pod is killed after its terminationGracePeriodSeconds expires. This grace period applies to the total time it takes for both the
PreStop hook to execute and for the Container to stop normally. If, for example, terminationGracePeriodSeconds is 60, and the hook
takes 55 seconds to complete, and the Container takes 10 seconds to stop normally after receiving the signal, then the Container will
be killed before it can stop normally, since terminationGracePeriodSeconds is less than the total time (55+10) it takes for these two
things to happen.

If either a PostStart or PreStop hook fails, it kills the Container.

Users should make their hook handlers as lightweight as possible. There are cases, however, when long running commands make
sense, such as when saving state prior to stopping a Container.

Hook delivery guarantees


Hook delivery is intended to be at least once, which means that a hook may be called multiple times for any given event, such as for
PostStart or PreStop . It is up to the hook implementation to handle this correctly.

Generally, only single deliveries are made. If, for example, an HTTP hook receiver is down and is unable to take traffic, there is no
attempt to resend. In some rare cases, however, double delivery may occur. For instance, if a kubelet restarts in the middle of
sending a hook, the hook might be resent after the kubelet comes back up.

Debugging Hook handlers


The logs for a Hook handler are not exposed in Pod events. If a handler fails for some reason, it broadcasts an event. For PostStart ,
this is the FailedPostStartHook event, and for PreStop , this is the FailedPreStopHook event. To generate a FailedPostStartHook
event yourself, modify the lifecycle-events.yaml file to change the postStart command to "badcommand" and apply it. Here is some
example output of the resulting events you see from running kubectl describe pod lifecycle-demo :

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7s default-scheduler Successfully assigned default/lifecycle-demo to ip-XXX-XXX
Normal Pulled 6s kubelet Successfully pulled image "nginx" in 229.604315ms
Normal Pulling 4s (x2 over 6s) kubelet Pulling image "nginx"
Normal Created 4s (x2 over 5s) kubelet Created container lifecycle-demo-container
Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo-container
Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container "lifecycl
Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook
Normal Pulled 4s kubelet Successfully pulled image "nginx" in 215.66395ms
Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container

What's next
Learn more about the Container environment.
Get hands-on experience attaching handlers to Container lifecycle events.


4 - Workloads
Understand Pods, the smallest deployable compute object in Kubernetes, and the higher-level abstractions
that help you to run them.

A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on
Kubernetes you run it inside a set of pods. In Kubernetes, a Pod represents a set of running containers on your cluster.

Kubernetes pods have a defined lifecycle. For example, once a pod is running in your cluster then a critical fault on the node where
that pod is running means that all the pods on that node fail. Kubernetes treats that level of failure as final: you would need to
create a new Pod to recover, even if the node later becomes healthy.

However, to make life considerably easier, you don't need to manage each Pod directly. Instead, you can use workload resources that
manage a set of pods on your behalf. These resources configure controllers that make sure the right number of the right kind of pod
are running, to match the state you specified.

Kubernetes provides several built-in workload resources:

Deployment and ReplicaSet (replacing the legacy resource ReplicationController). Deployment is a good fit for managing a
stateless application workload on your cluster, where any Pod in the Deployment is interchangeable and can be replaced if
needed.
StatefulSet lets you run one or more related Pods that do track state somehow. For example, if your workload records data
persistently, you can run a StatefulSet that matches each Pod with a PersistentVolume. Your code, running in the Pods for that
StatefulSet, can replicate data to other Pods in the same StatefulSet to improve overall resilience.
DaemonSet defines Pods that provide facilities that are local to nodes. Every time you add a node to your cluster that matches
the specification in a DaemonSet, the control plane schedules a Pod for that DaemonSet onto the new node. Each pod in a
DaemonSet performs a job similar to a system daemon on a classic Unix / POSIX server. A DaemonSet might be fundamental to
the operation of your cluster, such as a plugin to run cluster networking, it might help you to manage the node, or it could
provide optional behavior that enhances the container platform you are running.
Job and CronJob provide different ways to define tasks that run to completion and then stop. You can use a Job to define a task
that runs to completion, just once. You can use a CronJob to run the same Job multiple times according to a schedule.

In the wider Kubernetes ecosystem, you can find third-party workload resources that provide additional behaviors. Using a custom
resource definition, you can add in a third-party workload resource if you want a specific behavior that's not part of Kubernetes'
core. For example, if you wanted to run a group of Pods for your application but stop work unless all the Pods are available (perhaps
for some high-throughput distributed task), then you can implement or install an extension that does provide that feature.

What's next
As well as reading about each API kind for workload management, you can read how to do specific tasks:

Run a stateless application using a Deployment


Run a stateful application either as a single instance or as a replicated set
Run automated tasks with a CronJob

To learn about Kubernetes' mechanisms for separating code from configuration, visit Configuration.

There are two supporting concepts that provide background about how Kubernetes manages pods for applications:

Garbage collection tidies up objects from your cluster after their owning resource has been removed.
The time-to-live after finished controller removes Jobs once a defined time has passed since they completed.

Once your application is running, you might want to make it available on the internet as a Service or, for web applications only,
using an Ingress.


4.1 - Pods
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.

A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a
specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A
Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled.
In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed
on the same logical host.

As well as application containers, a Pod can contain init containers that run during Pod startup. You can also inject
ephemeral containers for debugging a running Pod.

What is a Pod?
Note:
You need to install a container runtime into each node in the cluster so that Pods can run there.

The shared context of a Pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation - the same things that
isolate a container. Within a Pod's context, the individual applications may have further sub-isolations applied.

A Pod is similar to a set of containers with shared namespaces and shared filesystem volumes.

Pods in a Kubernetes cluster are used in two main ways:

Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case,
you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers
directly.

Pods that run multiple containers that need to work together. A Pod can encapsulate an application composed of multiple
co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive
unit.

Grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this
pattern only in specific instances in which your containers are tightly coupled.

You don't need to run multiple containers to provide replication (for resilience or capacity); if you need multiple replicas, see
Workload management.

Using Pods
The following is an example of a Pod which consists of a container running the image nginx:1.14.2 .

pods/simple-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

To create the Pod shown above, run the following command:



kubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml

Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on
how Pods are used with workload resources.

Workload resources for managing pods


Usually you don't need to create Pods directly, even singleton Pods. Instead, create them using workload resources such as
Deployment or Job. If your Pods need to track state, consider the StatefulSet resource.

Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (to provide more
overall resources by running more instances), you should use multiple Pods, one for each instance. In Kubernetes, this is typically
referred to as replication. Replicated Pods are usually created and managed as a group by a workload resource and its controller.

See Pods and controllers for more information on how Kubernetes uses workload resources, and their controllers, to implement
application scaling and auto-healing.

Pods natively provide two kinds of shared resources for their constituent containers: networking and storage.

Working with Pods


You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This is because Pods are designed as relatively
ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a controller), the new Pod is scheduled to
run on a Node in your cluster. The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, the Pod is
evicted for lack of resources, or the node fails.

Note:
Restarting a container in a Pod should not be confused with restarting a Pod. A Pod is not a process, but an environment for
running container(s). A Pod persists until it is deleted.

The name of a Pod must be a valid DNS subdomain value, but this can produce unexpected results for the Pod hostname. For best
compatibility, the name should follow the more restrictive rules for a DNS label.

Pod OS

ⓘ FEATURE STATE: Kubernetes v1.25 [stable]

You should set the .spec.os.name field to either windows or linux to indicate the OS on which you want the pod to run. These two
are the only operating systems supported for now by Kubernetes. In the future, this list may be expanded.

In Kubernetes v1.30, the value of .spec.os.name does not affect how the kube-scheduler picks a node to run a Pod. In any cluster
where there is more than one operating system for running nodes, you should set the kubernetes.io/os label correctly on each node,
and define Pods with a nodeSelector based on the operating system label. The kube-scheduler assigns your Pod to a node based on
other criteria and may or may not succeed in picking a suitable node placement where the node OS is right for the containers in that
Pod. The Pod security standards also use this field to avoid enforcing policies that aren't relevant to the operating system.

Pods and controllers


You can use workload resources to create and manage multiple Pods for you. A controller for the resource handles replication and
rollout and automatic healing in case of Pod failure. For example, if a Node fails, a controller notices that Pods on that Node have
stopped working and creates a replacement Pod. The scheduler places the replacement Pod onto a healthy Node.

Here are some examples of workload resources that manage one or more Pods:

Deployment
StatefulSet
DaemonSet


Pod templates
Controllers for workload resources create Pods from a pod template and manage those Pods on your behalf.

PodTemplates are specifications for creating Pods, and are included in workload resources such as Deployments, Jobs, and
DaemonSets.

Each controller for a workload resource uses the PodTemplate inside the workload object to make actual Pods. The PodTemplate is
part of the desired state of whatever workload resource you used to run your app.

When you create a Pod, you can include environment variables in the Pod template for the containers that run in the Pod.

The sample below is a manifest for a simple Job with a template that starts one container. The container in that Pod prints a
message then pauses.

apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    # This is the pod template
    spec:
      containers:
      - name: hello
        image: busybox:1.28
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
      restartPolicy: OnFailure
    # The pod template ends here

Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the
pod template for a workload resource, that resource needs to create replacement Pods that use the updated template.

For example, the StatefulSet controller ensures that the running Pods match the current pod template for each StatefulSet object. If
you edit the StatefulSet to change its pod template, the StatefulSet starts to create new Pods based on the updated template.
Eventually, all of the old Pods are replaced with new Pods, and the update is complete.

Each workload resource implements its own rules for handling changes to the Pod template. If you want to read more about
StatefulSet specifically, read Update strategy in the StatefulSet Basics tutorial.

On Nodes, the kubelet does not directly observe or manage any of the details around pod templates and updates; those details are
abstracted away. That abstraction and separation of concerns simplifies system semantics, and makes it feasible to extend the
cluster's behavior without changing existing code.

Pod update and replacement


As mentioned in the previous section, when the Pod template for a workload resource is changed, the controller creates new Pods
based on the updated template instead of updating or patching the existing Pods.

Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place.
However, Pod update operations like patch , and replace have some limitations:

Most of the metadata about a Pod is immutable. For example, you cannot change the namespace , name , uid , or
creationTimestamp fields; the generation field is unique. It only accepts updates that increment the field's current value.

If the metadata.deletionTimestamp is set, no new entry can be added to the metadata.finalizers list.

Pod updates may not change fields other than spec.containers[*].image , spec.initContainers[*].image ,
spec.activeDeadlineSeconds or spec.tolerations . For spec.tolerations , you can only add new entries.

When updating the spec.activeDeadlineSeconds field, two types of updates are allowed:

1. setting the unassigned field to a positive number;


2. updating the field from a positive number to a smaller, non-negative number.

Resource sharing and communication


Pods enable data sharing and communication among their constituent containers.

Storage in Pods
A Pod can specify a set of shared storage volumes. All containers in the Pod can access the shared volumes, allowing those
containers to share data. Volumes also allow persistent data in a Pod to survive in case one of the containers within needs to be
restarted. See Storage for more information on how Kubernetes implements shared storage and makes it available to Pods.

Pod networking
Each Pod is assigned a unique IP address for each address family. Every container in a Pod shares the network namespace, including
the IP address and network ports. Inside a Pod (and only then), the containers that belong to the Pod can communicate with one
another using localhost . When containers in a Pod communicate with entities outside the Pod, they must coordinate how they use
the shared network resources (such as ports). Within a Pod, containers share an IP address and port space, and can find each other
via localhost . The containers in a Pod can also communicate with each other using standard inter-process communications like
SystemV semaphores or POSIX shared memory. Containers in different Pods have distinct IP addresses and can not communicate by
OS-level IPC without special configuration. Containers that want to interact with a container running in a different Pod can use IP
networking to communicate.

Containers within the Pod see the system hostname as being the same as the configured name for the Pod. There's more about this
in the networking section.

Pod security settings


To set security constraints on Pods and containers, you use the securityContext field in the Pod specification. This field gives you
granular control over what a Pod or individual containers can do. For example:

Drop specific Linux capabilities to avoid the impact of a CVE.


Force all processes in the Pod to run as a non-root user or as a specific user or group ID.
Set a specific seccomp profile.
Set Windows security options, such as whether containers run as HostProcess.
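
A brief sketch combining several of these settings (the values are illustrative, not a recommendation for every workload):

apiVersion: v1
kind: Pod
metadata:
  name: security-context-example
spec:
  securityContext:
    runAsNonRoot: true           # refuse to run containers whose image would run as root
    runAsUser: 1000              # example non-root UID
    seccompProfile:
      type: RuntimeDefault       # use the container runtime's default seccomp profile
  containers:
    - name: app
      image: registry.example/app:v1   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]          # drop all Linux capabilities for this container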

Caution:
You can also use the Pod securityContext to enable privileged mode in Linux containers. Privileged mode overrides many of the
other security settings in the securityContext. Avoid using this setting unless you can't grant the equivalent permissions by using
other fields in the securityContext. In Kubernetes 1.26 and later, you can run Windows containers in a similarly privileged mode
by setting the windowsOptions.hostProcess flag on the security context of the Pod spec. For details and instructions, see Create a
Windows HostProcess Pod.

To learn about kernel-level security constraints that you can use, see Linux kernel security constraints for Pods and containers.
To learn more about the Pod security context, see Configure a Security Context for a Pod or Container.

Static Pods
Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them. Whereas most
Pods are managed by the control plane (for example, a Deployment), for static Pods, the kubelet directly supervises each static Pod
(and restarts it if it fails).

Static Pods are always bound to one Kubelet on a specific node. The main use for static Pods is to run a self-hosted control plane: in
other words, using the kubelet to supervise the individual control plane components.

The kubelet automatically tries to create a mirror Pod on the Kubernetes API server for each static Pod. This means that the Pods
running on a node are visible on the API server, but cannot be controlled from there. See the guide Create static Pods for more
information.


Note:
The spec of a static Pod cannot refer to other API objects (e.g., ServiceAccount, ConfigMap, Secret, etc).

Pods with multiple containers


Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service. The containers in a
Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. The containers can share
resources and dependencies, communicate with one another, and coordinate when and how they are terminated.

Pods in a Kubernetes cluster are used in two main ways:

Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case,
you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers
directly.
Pods that run multiple containers that need to work together. A Pod can encapsulate an application composed of multiple
co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive
unit of service—for example, one container serving data stored in a shared volume to the public, while a separate
sidecar container refreshes or updates those files. The Pod wraps these containers, storage resources, and an ephemeral
network identity together as a single unit.

For example, you might have a container that acts as a web server for files in a shared volume, and a separate sidecar container that
updates those files from a remote source.

Some Pods have init containers as well as app containers. By default, init containers run and complete before the app containers are
started.

You can also have sidecar containers that provide auxiliary services to the main application Pod (for example: a service mesh).

ⓘ FEATURE STATE: Kubernetes v1.29 [beta]

Enabled by default, the SidecarContainers feature gate allows you to specify restartPolicy: Always for init containers. Setting the
Always restart policy ensures that the containers where you set it are treated as sidecars that are kept running during the entire
lifetime of the Pod. Containers that you explicitly define as sidecar containers start up before the main application Pod and remain
running until the Pod is shut down.

Container probes
A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet can invoke
different actions:

ExecAction (performed with the help of the container runtime)


TCPSocketAction (checked directly by the kubelet)

HTTPGetAction (checked directly by the kubelet)

You can read more about probes in the Pod Lifecycle documentation.
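
For instance, an HTTPGetAction liveness probe might be declared like this; the path, port, and timings are illustrative:

spec:
  containers:
    - name: app
      image: registry.example/app:v1   # hypothetical image
      livenessProbe:
        httpGet:
          path: /healthz         # endpoint the kubelet polls
          port: 8080
        initialDelaySeconds: 5   # wait before the first probe
        periodSeconds: 10        # how often to probe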


What's next
Learn about the lifecycle of a Pod.
Learn about RuntimeClass and how you can use it to configure different Pods with different container runtime configurations.
Read about PodDisruptionBudget and how you can use it to manage application availability during disruptions.
Pod is a top-level resource in the Kubernetes REST API. The Pod object definition describes the object in detail.
The Distributed System Toolkit: Patterns for Composite Containers explains common layouts for Pods with more than one
container.
Read about Pod topology spread constraints

To understand the context for why Kubernetes wraps a common Pod API in other resources (such as StatefulSets or Deployments),
you can read about the prior art, including:

Aurora
Borg
Marathon
Omega
Tupperware.


4.1.1 - Pod Lifecycle


This page describes the lifecycle of a Pod. Pods follow a defined lifecycle, starting in the Pending phase, moving through Running if
at least one of its primary containers starts OK, and then through either the Succeeded or Failed phases depending on whether any
container in the Pod terminated in failure.

Like individual application containers, Pods are considered to be relatively ephemeral (rather than durable) entities. Pods are
created, assigned a unique ID (UID), and scheduled to run on nodes where they remain until termination (according to restart policy)
or deletion. If a Node dies, the Pods running on (or scheduled to run on) that node are marked for deletion. The control plane marks
the Pods for removal after a timeout period.

Pod lifetime
Whilst a Pod is running, the kubelet is able to restart containers to handle some kind of faults. Within a Pod, Kubernetes tracks
different container states and determines what action to take to make the Pod healthy again.

In the Kubernetes API, Pods have both a specification and an actual status. The status for a Pod object consists of a set of Pod
conditions. You can also inject custom readiness information into the condition data for a Pod, if that is useful to your application.

Pods are only scheduled once in their lifetime; assigning a Pod to a specific node is called binding, and the process of selecting which
node to use is called scheduling. Once a Pod has been scheduled and is bound to a node, Kubernetes tries to run that Pod on the
node. The Pod runs on that node until it stops, or until the Pod is terminated; if Kubernetes isn't able to start the Pod on the selected
node (for example, if the node crashes before the Pod starts), then that particular Pod never starts.

You can use Pod Scheduling Readiness to delay scheduling for a Pod until all its scheduling gates are removed. For example, you
might want to define a set of Pods but only trigger scheduling once all the Pods have been created.

Pods and fault recovery


If one of the containers in the Pod fails, then Kubernetes may try to restart that specific container. Read How Pods handle problems
with containers to learn more.

Pods can however fail in a way that the cluster cannot recover from, and in that case Kubernetes does not attempt to heal the Pod
further; instead, Kubernetes deletes the Pod and relies on other components to provide automatic healing.

If a Pod is scheduled to a node and that node then fails, the Pod is treated as unhealthy and Kubernetes eventually deletes the Pod.
A Pod won't survive an eviction due to a lack of resources or Node maintenance.

Kubernetes uses a higher-level abstraction, called a controller, that handles the work of managing the relatively disposable Pod
instances.

A given Pod (as defined by a UID) is never "rescheduled" to a different node; instead, that Pod can be replaced by a new, near-
identical Pod. If you make a replacement Pod, it can even have the same name (as in .metadata.name ) that the old Pod had, but the
replacement would have a different .metadata.uid from the old Pod.

Kubernetes does not guarantee that a replacement for an existing Pod would be scheduled to the same node as the old Pod that
was being replaced.

Associated lifetimes
When something is said to have the same lifetime as a Pod, such as a volume, that means that the thing exists as long as that
specific Pod (with that exact UID) exists. If that Pod is deleted for any reason, and even if an identical replacement is created, the
related thing (a volume, in this example) is also destroyed and created anew.


Figure 1.
A multi-container Pod that contains a file puller sidecar and a web server. The Pod uses an ephemeral emptyDir volume for shared
storage between the containers.

Pod phase
A Pod's status field is a PodStatus object, which has a phase field.

The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The phase is not intended to be a
comprehensive rollup of observations of container or Pod state, nor is it intended to be a comprehensive state machine.

The number and meanings of Pod phase values are tightly guarded. Other than what is documented here, nothing should be
assumed about Pods that have a given phase value.

Here are the possible values for phase :

Pending : The Pod has been accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready
to run. This includes time a Pod spends waiting to be scheduled as well as the time spent downloading container images over the
network.

Running : The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or
is in the process of starting or restarting.

Succeeded : All containers in the Pod have terminated in success, and will not be restarted.

Failed : All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container
either exited with non-zero status or was terminated by the system, and is not set for automatic restarting.

Unknown : For some reason the state of the Pod could not be obtained. This phase typically occurs due to an error in communicating
with the node where the Pod should be running.

Note:
When a Pod is being deleted, it is shown as Terminating by some kubectl commands. This Terminating status is not one of the
Pod phases. A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to
terminate a Pod by force.

Since Kubernetes 1.27, the kubelet transitions deleted Pods, except for static Pods and force-deleted Pods without a finalizer, to a
terminal phase ( Failed or Succeeded depending on the exit statuses of the pod containers) before their deletion from the API
server.

If a node dies or is disconnected from the rest of the cluster, Kubernetes applies a policy for setting the phase of all Pods on the lost
node to Failed.


Container states
As well as the phase of the Pod overall, Kubernetes tracks the state of each container inside a Pod. You can use container lifecycle
hooks to trigger events to run at certain points in a container's lifecycle.

Once the scheduler assigns a Pod to a Node, the kubelet starts creating containers for that Pod using a container runtime. There are
three possible container states: Waiting , Running , and Terminated .

To check the state of a Pod's containers, you can use kubectl describe pod <name-of-pod> . The output shows the state for each
container within that Pod.

Each state has a specific meaning:

Waiting
If a container is not in either the Running or Terminated state, it is Waiting . A container in the Waiting state is still running the
operations it requires in order to complete start up: for example, pulling the container image from a container image registry, or
applying Secret data. When you use kubectl to query a Pod with a container that is Waiting , you also see a Reason field to
summarize why the container is in that state.

Running
The Running status indicates that a container is executing without issues. If there was a postStart hook configured, it has already
executed and finished. When you use kubectl to query a Pod with a container that is Running , you also see information about when
the container entered the Running state.

Terminated
A container in the Terminated state began execution and then either ran to completion or failed for some reason. When you use
kubectl to query a Pod with a container that is Terminated , you see a reason, an exit code, and the start and finish time for that
container's period of execution.

If a container has a preStop hook configured, this hook runs before the container enters the Terminated state.

How Pods handle problems with containers


Kubernetes manages container failures within Pods using a restartPolicy defined in the Pod spec . This policy determines how
Kubernetes reacts to containers exiting due to errors or other reasons, which falls in the following sequence:

1. Initial crash: Kubernetes attempts an immediate restart based on the Pod restartPolicy .
2. Repeated crashes: After the initial crash Kubernetes applies an exponential backoff delay for subsequent restarts, described
in restartPolicy. This prevents rapid, repeated restart attempts from overloading the system.
3. CrashLoopBackOff state: This indicates that the backoff delay mechanism is currently in effect for a given container that is in
a crash loop, failing and restarting repeatedly.
4. Backoff reset: If a container runs successfully for a certain duration (e.g., 10 minutes), Kubernetes resets the backoff delay,
treating any new crash as the first one.

In practice, a CrashLoopBackOff is a condition or event that might be seen as output from the kubectl command, while describing or
listing Pods, when a container in the Pod fails to start properly and then continually tries and fails in a loop.

In other words, when a container enters the crash loop, Kubernetes applies the exponential backoff delay mentioned in the
Container restart policy. This mechanism prevents a faulty container from overwhelming the system with continuous failed start
attempts.

The CrashLoopBackOff can be caused by issues like the following:

Application errors that cause the container to exit.


Configuration errors, such as incorrect environment variables or missing configuration files.
Resource constraints, where the container might not have enough memory or CPU to start properly.
Health checks failing if the application doesn't start serving within the expected time.
Container liveness probes or startup probes returning a Failure result as mentioned in the probes section.


To investigate the root cause of a CrashLoopBackOff issue, a user can:

1. Check logs: Use kubectl logs <name-of-pod> to check the logs of the container. This is often the most direct way to diagnose
the issue causing the crashes.
2. Inspect events: Use kubectl describe pod <name-of-pod> to see events for the Pod, which can provide hints about
configuration or resource issues.
3. Review configuration: Ensure that the Pod configuration, including environment variables and mounted volumes, is correct
and that all required external resources are available.
4. Check resource limits: Make sure that the container has enough CPU and memory allocated. Sometimes, increasing the
resources in the Pod definition can resolve the issue.
5. Debug application: There might exist bugs or misconfigurations in the application code. Running this container image locally
or in a development environment can help diagnose application specific issues.

Container restart policy


The spec of a Pod has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always.

The restartPolicy for a Pod applies to app containers in the Pod and to regular init containers. Sidecar containers ignore the Pod-
level restartPolicy field: in Kubernetes, a sidecar is defined as an entry inside initContainers that has its container-level
restartPolicy set to Always . For init containers that exit with an error, the kubelet restarts the init container if the Pod level
restartPolicy is either OnFailure or Always :

Always : Automatically restarts the container after any termination.


OnFailure : Only restarts the container if it exits with an error (non-zero exit status).

Never : Does not automatically restart the terminated container.

When the kubelet is handling container restarts according to the configured restart policy, that only applies to restarts that create
replacement containers inside the same Pod and running on the same node. After containers in a Pod exit, the kubelet restarts them
with an exponential backoff delay (10s, 20s, 40s, …), that is capped at 300 seconds (5 minutes). Once a container has executed for 10
minutes without any problems, the kubelet resets the restart backoff timer for that container. Sidecar containers and Pod lifecycle
explains the behaviour of init containers when you specify a restartPolicy field on them.
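
To make the interaction between these two levels concrete, here is a minimal sketch of a Pod manifest (the names, image, and
commands are illustrative placeholders, not taken from this page) that sets a Pod-level restartPolicy of OnFailure while marking one
init container as a sidecar through its container-level restartPolicy: Always:

apiVersion: v1
kind: Pod
metadata:
  name: restart-policy-demo        # hypothetical name
spec:
  restartPolicy: OnFailure         # applies to the app container and regular init containers
  initContainers:
  - name: log-sidecar              # sidecar: ignores the Pod-level restartPolicy
    image: busybox:1.28
    restartPolicy: Always          # container-level field that marks this entry as a sidecar
    command: ['sh', '-c', 'touch /tmp/app.log && tail -F /tmp/app.log']
  containers:
  - name: app
    image: busybox:1.28
    command: ['sh', '-c', 'echo running && sleep 3600']

With this spec, the app container is restarted (with backoff) only if it exits with a non-zero status, while the sidecar is always
restarted.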

Pod conditions
A Pod has a PodStatus, which has an array of PodConditions through which the Pod has or has not passed. Kubelet manages the
following PodConditions:

PodScheduled: the Pod has been scheduled to a node.


PodReadyToStartContainers : (beta feature; enabled by default) the Pod sandbox has been successfully created and networking
configured.
ContainersReady : all containers in the Pod are ready.

Initialized : all init containers have completed successfully.

Ready : the Pod is able to serve requests and should be added to the load balancing pools of all matching Services.

Field name          Description

type                Name of this Pod condition.

status              Indicates whether that condition is applicable, with possible values "True", "False", or "Unknown".

lastProbeTime       Timestamp of when the Pod condition was last probed.

lastTransitionTime  Timestamp for when the Pod last transitioned from one status to another.

reason              Machine-readable, UpperCamelCase text indicating the reason for the condition's last transition.

message             Human-readable message indicating details about the last status transition.


Pod readiness

ⓘ FEATURE STATE: Kubernetes v1.14 [stable]

Your application can inject extra feedback or signals into PodStatus: Pod readiness. To use this, set readinessGates in the Pod's spec
to specify a list of additional conditions that the kubelet evaluates for Pod readiness.

Readiness gates are determined by the current state of the status.conditions fields for the Pod. If Kubernetes cannot find such a
condition in the status.conditions field of a Pod, the status of the condition is defaulted to "False".

Here is an example:

kind: Pod
...
spec:
  readinessGates:
    - conditionType: "www.example.com/feature-1"
status:
  conditions:
    - type: Ready                              # a built in PodCondition
      status: "False"
      lastProbeTime: null
      lastTransitionTime: 2018-01-01T00:00:00Z
    - type: "www.example.com/feature-1"        # an extra PodCondition
      status: "False"
      lastProbeTime: null
      lastTransitionTime: 2018-01-01T00:00:00Z
  containerStatuses:
    - containerID: docker://abcd...
      ready: true
...

The Pod conditions you add must have names that meet the Kubernetes label key format.

Status for Pod readiness


The kubectl patch command does not support patching object status. To set these status.conditions for the Pod, applications and
operators should use the PATCH action. You can use a Kubernetes client library to write code that sets custom Pod conditions for
Pod readiness.

For a Pod that uses custom conditions, that Pod is evaluated to be ready only when both the following statements apply:

All containers in the Pod are ready.


All conditions specified in readinessGates are True .

When a Pod's containers are Ready but at least one custom condition is missing or False , the kubelet sets the Pod's condition to
ContainersReady .

Pod network readiness

ⓘ FEATURE STATE: Kubernetes v1.29 [beta]

Note:
During its early development, this condition was named PodHasNetwork.

After a Pod gets scheduled on a node, it needs to be admitted by the kubelet and to have any required storage volumes mounted.
Once these phases are complete, the kubelet works with a container runtime (using Container runtime interface (CRI)) to set up a
runtime sandbox and configure networking for the Pod. If the PodReadyToStartContainersCondition feature gate is enabled (it is
enabled by default for Kubernetes 1.30), the PodReadyToStartContainers condition will be added to the status.conditions field of a
Pod.

The PodReadyToStartContainers condition is set to False by the Kubelet when it detects a Pod does not have a runtime sandbox with
networking configured. This occurs in the following scenarios:

Early in the lifecycle of the Pod, when the kubelet has not yet begun to set up a sandbox for the Pod using the container
runtime.
Later in the lifecycle of the Pod, when the Pod sandbox has been destroyed due to either:
the node rebooting, without the Pod getting evicted
for container runtimes that use virtual machines for isolation, the Pod sandbox virtual machine rebooting, which then
requires creating a new sandbox and fresh container network configuration.

The PodReadyToStartContainers condition is set to True by the kubelet after the successful completion of sandbox creation and
network configuration for the Pod by the runtime plugin. The kubelet can start pulling container images and create containers after
PodReadyToStartContainers condition has been set to True .

For a Pod with init containers, the kubelet sets the Initialized condition to True after the init containers have successfully
completed (which happens after successful sandbox creation and network configuration by the runtime plugin). For a Pod without
init containers, the kubelet sets the Initialized condition to True before sandbox creation and network configuration starts.

Container probes
A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes
code within the container, or makes a network request.

Check mechanisms
There are four different ways to check a container using a probe. Each probe must define exactly one of these four mechanisms:

exec

Executes a specified command inside the container. The diagnostic is considered successful if the command exits with a status
code of 0.

grpc

Performs a remote procedure call using gRPC. The target should implement gRPC health checks. The diagnostic is considered
successful if the status of the response is SERVING.

httpGet

Performs an HTTP GET request against the Pod's IP address on a specified port and path. The diagnostic is considered successful if
the response has a status code greater than or equal to 200 and less than 400.

tcpSocket

Performs a TCP check against the Pod's IP address on a specified port. The diagnostic is considered successful if the port is open.
If the remote system (the container) closes the connection immediately after it opens, this counts as healthy.
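
To show how these mechanisms are written in a manifest, here is a minimal, hypothetical sketch (the container name, image, port,
and paths are assumptions for illustration only) that attaches three of the mechanisms to the three probe types; a gRPC check is
configured analogously with a grpc block and a port:

apiVersion: v1
kind: Pod
metadata:
  name: probe-mechanisms-demo      # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example/app:1.0   # hypothetical image
    livenessProbe:
      exec:                        # success when the command exits with status code 0
        command: ["cat", "/tmp/healthy"]
    readinessProbe:
      httpGet:                     # success on a response code from 200 up to (but not including) 400
        path: /healthz
        port: 8080
    startupProbe:
      tcpSocket:                   # success if the port accepts a TCP connection
        port: 8080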

Caution:
Unlike the other mechanisms, the exec probe involves creating or forking additional processes each time it runs. As a result, on
clusters with high Pod density and low initialDelaySeconds and periodSeconds intervals, probes that use the exec mechanism can
add noticeable CPU overhead on the node. In such scenarios, consider using one of the alternative probe mechanisms to avoid
that overhead.

Probe outcome
Each probe has one of three results:

Success

The container passed the diagnostic.


Failure

The container failed the diagnostic.

Unknown

The diagnostic failed (no action should be taken, and the kubelet will make further checks).

Types of probe
The kubelet can optionally perform and react to three kinds of probes on running containers:

livenessProbe

Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is
subjected to its restart policy. If a container does not provide a liveness probe, the default state is Success.

readinessProbe

Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes
the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay
is Failure. If a container does not provide a readiness probe, the default state is Success.

startupProbe

Indicates whether the application within the container is started. All other probes are disabled if a startup probe is provided, until
it succeeds. If the startup probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a
container does not provide a startup probe, the default state is Success.

For more information about how to set up a liveness, readiness, or startup probe, see Configure Liveness, Readiness and Startup
Probes.

When should you use a liveness probe?


If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not
necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod's
restartPolicy .

If you'd like your container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a restartPolicy of
Always or OnFailure.

When should you use a readiness probe?


If you'd like to start sending traffic to a Pod only when a probe succeeds, specify a readiness probe. In this case, the readiness probe
might be the same as the liveness probe, but the existence of the readiness probe in the spec means that the Pod will start without
receiving any traffic and only start receiving traffic after the probe starts succeeding.

If you want your container to be able to take itself down for maintenance, you can specify a readiness probe that checks an endpoint
specific to readiness that is different from the liveness probe.

If your app has a strict dependency on back-end services, you can implement both a liveness and a readiness probe. The liveness
probe passes when the app itself is healthy, but the readiness probe additionally checks that each required back-end service is
available. This helps you avoid directing traffic to Pods that can only respond with error messages.

If your container needs to work on loading large data, configuration files, or migrations during startup, you can use a startup probe.
However, if you want to detect the difference between an app that has failed and an app that is still processing its startup data, you
might prefer a readiness probe.

Note:
If you want to be able to drain requests when the Pod is deleted, you do not necessarily need a readiness probe; on deletion,
the Pod automatically puts itself into an unready state regardless of whether the readiness probe exists. The Pod remains in the
unready state while it waits for the containers in the Pod to stop.

When should you use a startup probe?


Startup probes are useful for Pods that have containers that take a long time to come into service. Rather than set a long liveness
interval, you can configure a separate configuration for probing the container as it starts up, allowing a time longer than the liveness
interval would allow.

If your container usually starts in more than initialDelaySeconds + failureThreshold × periodSeconds , you should specify a startup
probe that checks the same endpoint as the liveness probe. The default for periodSeconds is 10s. You should then set its
failureThreshold high enough to allow the container to start, without changing the default values of the liveness probe. This helps
to protect against deadlocks.
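
For example, the following hypothetical sketch (image, port, and path are placeholders) keeps a strict liveness probe but adds a
startup probe with failureThreshold: 30 and periodSeconds: 10, giving the container up to 30 × 10 = 300 seconds to start before the
liveness probe takes effect:

apiVersion: v1
kind: Pod
metadata:
  name: slow-start-demo            # hypothetical name
spec:
  containers:
  - name: slow-app
    image: registry.example/slow-app:1.0   # hypothetical image
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 3          # liveness stays strict once the app is up
    startupProbe:
      httpGet:
        path: /healthz             # same endpoint as the liveness probe
        port: 8080
      periodSeconds: 10
      failureThreshold: 30         # 30 × 10s = up to 300s allowed for startup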

Termination of Pods
Because Pods represent processes running on nodes in the cluster, it is important to allow those processes to gracefully terminate
when they are no longer needed (rather than being abruptly stopped with a KILL signal and having no chance to clean up).

The design aim is for you to be able to request deletion and know when processes terminate, but also be able to ensure that deletes
eventually complete. When you request deletion of a Pod, the cluster records and tracks the intended grace period before the Pod is
allowed to be forcefully killed. With that forceful shutdown tracking in place, the kubelet attempts graceful shutdown.

Typically, with this graceful termination of the pod, kubelet makes requests to the container runtime to attempt to stop the
containers in the pod by first sending a TERM (aka. SIGTERM) signal, with a grace period timeout, to the main process in each
container. The requests to stop the containers are processed by the container runtime asynchronously. There is no guarantee to the
order of processing for these requests. Many container runtimes respect the STOPSIGNAL value defined in the container image and, if
different, send the container image configured STOPSIGNAL instead of TERM. Once the grace period has expired, the KILL signal is
sent to any remaining processes, and the Pod is then deleted from the API Server. If the kubelet or the container runtime's
management service is restarted while waiting for processes to terminate, the cluster retries from the start including the full original
grace period.

Pod termination flow, illustrated with an example:

1. You use the kubectl tool to manually delete a specific Pod, with the default grace period (30 seconds).

2. The Pod in the API server is updated with the time beyond which the Pod is considered "dead" along with the grace period. If
you use kubectl describe to check the Pod you're deleting, that Pod shows up as "Terminating". On the node where the Pod is
running: as soon as the kubelet sees that a Pod has been marked as terminating (a graceful shutdown duration has been set),
the kubelet begins the local Pod shutdown process.

1. If one of the Pod's containers has defined a preStop hook and the terminationGracePeriodSeconds in the Pod spec is not
set to 0, the kubelet runs that hook inside of the container. The default terminationGracePeriodSeconds setting is 30
seconds.

If the preStop hook is still running after the grace period expires, the kubelet requests a small, one-off grace period
extension of 2 seconds.

Note:
If the preStop hook needs longer to complete than the default grace period allows, you must modify
terminationGracePeriodSeconds to suit this (a minimal example manifest appears after this numbered list).

2. The kubelet triggers the container runtime to send a TERM signal to process 1 inside each container.

There is special ordering if the Pod has any sidecar containers defined. Otherwise, the containers in the Pod receive the
TERM signal at different times and in an arbitrary order. If the order of shutdowns matters, consider using a preStop
hook to synchronize (or switch to using sidecar containers).

3. At the same time as the kubelet is starting graceful shutdown of the Pod, the control plane evaluates whether to remove that
shutting-down Pod from EndpointSlice (and Endpoints) objects, where those objects represent a Service with a configured
selector. ReplicaSets and other workload resources no longer treat the shutting-down Pod as a valid, in-service replica.

Pods that shut down slowly should not continue to serve regular traffic and should start terminating and finish processing
open connections. Some applications need to go beyond finishing open connections and need more graceful termination, for
example, session draining and completion.

Any endpoints that represent the terminating Pods are not immediately removed from EndpointSlices, and a status indicating
terminating state is exposed from the EndpointSlice API (and the legacy Endpoints API). Terminating endpoints always have
their ready status as false (for backward compatibility with versions before 1.26), so load balancers will not use it for regular
traffic.

If traffic draining on a terminating Pod is needed, the actual readiness can be checked as a condition serving. You can find more
details on how to implement connection draining in the tutorial Pods And Endpoints Termination Flow.
4. The kubelet ensures the Pod is shut down and terminated

1. When the grace period expires, if there is still any container running in the Pod, the kubelet triggers forcible shutdown.
The container runtime sends SIGKILL to any processes still running in any container in the Pod. The kubelet also cleans up
a hidden pause container if that container runtime uses one.
2. The kubelet transitions the Pod into a terminal phase (Failed or Succeeded depending on the end state of its containers).
3. The kubelet triggers forcible removal of the Pod object from the API server, by setting grace period to 0 (immediate
deletion).
4. The API server deletes the Pod's API object, which is then no longer visible from any client.
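
As referenced in the note earlier in this list, here is a minimal, hypothetical sketch of a Pod that combines a preStop hook with a
longer-than-default grace period (the name, image, and drain command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo     # hypothetical name
spec:
  terminationGracePeriodSeconds: 60   # longer than the 30-second default
  containers:
  - name: app
    image: registry.example/app:1.0   # hypothetical image
    lifecycle:
      preStop:
        exec:
          # placeholder drain step; it must finish within the grace period
          command: ["sh", "-c", "sleep 15"]

The hook runs first, then the container runtime sends TERM to process 1 in the container; whatever remains of the 60 seconds is
available before KILL is sent.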

Forced Pod termination

Caution:
Forced deletions can be potentially disruptive for some workloads and their Pods.

By default, all deletes are graceful within 30 seconds. The kubectl delete command supports the --grace-period=<seconds> option
which allows you to override the default and specify your own value.

Setting the grace period to 0 forcibly and immediately deletes the Pod from the API server. If the Pod was still running on a node,
that forcible deletion triggers the kubelet to begin immediate cleanup.

Using kubectl, you must specify the additional flag --force along with --grace-period=0 in order to perform force deletions.

When a force deletion is performed, the API server does not wait for confirmation from the kubelet that the Pod has been
terminated on the node it was running on. It removes the Pod in the API immediately so a new Pod can be created with the same
name. On the node, Pods that are set to terminate immediately will still be given a small grace period before being force killed.

Caution:
Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue
to run on the cluster indefinitely.

If you need to force-delete Pods that are part of a StatefulSet, refer to the task documentation for deleting Pods from a StatefulSet.

Pod shutdown and sidecar containers


If your Pod includes one or more sidecar containers (init containers with an Always restart policy), the kubelet will delay sending the
TERM signal to these sidecar containers until the last main container has fully terminated. The sidecar containers will be terminated
in the reverse order they are defined in the Pod spec. This ensures that sidecar containers continue serving the other containers in
the Pod until they are no longer needed.

This means that slow termination of a main container will also delay the termination of the sidecar containers. If the grace period
expires before the termination process is complete, the Pod may enter forced termination. In this case, all remaining containers in
the Pod will be terminated simultaneously with a short grace period.

Similarly, if the Pod has a preStop hook that exceeds the termination grace period, emergency termination may occur. In general, if
you have used preStop hooks to control the termination order without sidecar containers, you can now remove them and allow the
kubelet to manage sidecar termination automatically.

Garbage collection of Pods


For failed Pods, the API objects remain in the cluster's API until a human or controller process explicitly removes them.

The Pod garbage collector (PodGC), which is a controller in the control plane, cleans up terminated Pods (with a phase of Succeeded
or Failed ), when the number of Pods exceeds the configured threshold (determined by terminated-pod-gc-threshold in the kube-
controller-manager). This avoids a resource leak as Pods are created and terminated over time.

Additionally, PodGC cleans up any Pods which satisfy any of the following conditions:

1. are orphan Pods - bound to a node which no longer exists,


2. are unscheduled terminating Pods,
3. are terminating Pods, bound to a non-ready node tainted with node.kubernetes.io/out-of-service, when the
NodeOutOfServiceVolumeDetach feature gate is enabled.

When the PodDisruptionConditions feature gate is enabled, along with cleaning up the Pods, PodGC will also mark them as failed if
they are in a non-terminal phase. Also, PodGC adds a Pod disruption condition when cleaning up an orphan Pod. See Pod disruption
conditions for more details.

What's next
Get hands-on experience attaching handlers to container lifecycle events.

Get hands-on experience configuring Liveness, Readiness and Startup Probes.

Learn more about container lifecycle hooks.

Learn more about sidecar containers.

For detailed information about Pod and container status in the API, see the API reference documentation covering status for
Pod.


4.1.2 - Init Containers


This page provides an overview of init containers: specialized containers that run before app containers in a Pod. Init containers can
contain utilities or setup scripts not present in an app image.

You can specify init containers in the Pod specification alongside the containers array (which describes app containers).

In Kubernetes, a sidecar container is a container that starts before the main application container and continues to run. This
document is about init containers: containers that run to completion during Pod initialization.

Understanding init containers


A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the
app containers are started.

Init containers are exactly like regular containers, except:

Init containers always run to completion.


Each init container must complete successfully before the next one starts.

If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds. However, if the Pod has a
restartPolicy of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed.

To specify an init container for a Pod, add the initContainers field into the Pod specification, as an array of container items (similar
to the app containers field and its contents). See Container in the API reference for more details.

The status of the init containers is returned in .status.initContainerStatuses field as an array of the container statuses (similar to
the .status.containerStatuses field).

Differences from regular containers


Init containers support all the fields and features of app containers, including resource limits, volumes, and security settings.
However, the resource requests and limits for an init container are handled differently, as documented in Resource sharing within
containers.

Regular init containers (in other words: excluding sidecar containers) do not support the lifecycle , livenessProbe , readinessProbe ,
or startupProbe fields. Init containers must run to completion before the Pod can be ready; sidecar containers continue running
during a Pod's lifetime, and do support some probes. See sidecar container for further details about sidecar containers.

If you specify multiple init containers for a Pod, kubelet runs each init container sequentially. Each init container must succeed
before the next can run. When all of the init containers have run to completion, kubelet initializes the application containers for the
Pod and runs them as usual.

Differences from sidecar containers


Init containers run and complete their tasks before the main application container starts. Unlike sidecar containers, init containers
are not continuously running alongside the main containers.

Init containers run to completion sequentially, and the main container does not start until all the init containers have successfully
completed.

Init containers do not support lifecycle , livenessProbe , readinessProbe , or startupProbe , whereas sidecar containers support all
of these probes to control their lifecycle.

Init containers share the same resources (CPU, memory, network) with the main application containers but do not interact directly
with them. They can, however, use shared volumes for data exchange.

Using init containers


Because init containers have separate images from app containers, they have some advantages for start-up related code:

Init containers can contain utilities or custom code for setup that are not present in an app image. For example, there is no
need to make an image FROM another image just to use a tool like sed , awk , python , or dig during setup.
The application image builder and deployer roles can work independently without the need to jointly build a single app image.
Init containers can run with a different view of the filesystem than app containers in the same Pod. Consequently, they can be
given access to Secrets that app containers cannot access.
Because init containers run to completion before any app containers start, init containers offer a mechanism to block or delay
app container startup until a set of preconditions are met. Once preconditions are met, all of the app containers in a Pod can
start in parallel.
Init containers can securely run utilities or custom code that would otherwise make an app container image less secure. By
keeping unnecessary tools separate you can limit the attack surface of your app container image.

Examples
Here are some ideas for how to use init containers:

Wait for a Service to be created, using a shell one-line command like:

for i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1

Register this Pod with a remote server from the downward API with a command like:

curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'

Wait for some time before starting the app container with a command like

sleep 60

Clone a Git repository into a Volume

Place values into a configuration file and run a template tool to dynamically generate a configuration file for the main app
container. For example, place the POD_IP value in a configuration and generate the main app configuration file using Jinja.

Init containers in use


This example defines a simple Pod that has two init containers. The first waits for myservice , and the second waits for mydb . Once
both init containers complete, the Pod runs the app container from its spec section.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]


You can start this Pod by running:

kubectl apply -f myapp.yaml

The output is similar to this:

pod/myapp-pod created

And check on its status with:

kubectl get -f myapp.yaml

The output is similar to this:

NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:0/2   0          6m

or for more details:

kubectl describe -f myapp.yaml

The output is similar to this:

Name: myapp-pod
Namespace: default
[...]
Labels: app.kubernetes.io/name=MyApp
Status: Pending
[...]
Init Containers:
init-myservice:
[...]
State: Running
[...]
init-mydb:
[...]
State: Waiting
Reason: PodInitializing
Ready: False
[...]
Containers:
myapp-container:
[...]
State: Waiting
Reason: PodInitializing
Ready: False
[...]
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason
--------- -------- ----- ---- ------------- -------- ------
16s 16s 1 {default-scheduler } Normal Scheduled
16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started

To see logs for the init containers in this Pod, run:


kubectl logs myapp-pod -c init-myservice # Inspect the first init container


kubectl logs myapp-pod -c init-mydb # Inspect the second init container

At this point, those init containers will be waiting to discover Services named mydb and myservice .

Here's a configuration you can use to make those Services appear:

---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377

To create the mydb and myservice services:

kubectl apply -f services.yaml

The output is similar to this:

service/myservice created
service/mydb created

You'll then see that those init containers complete, and that the myapp-pod Pod moves into the Running state:

kubectl get -f myapp.yaml

The output is similar to this:

NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          9m

This simple example should provide some inspiration for you to create your own init containers. What's next contains a link to a
more detailed example.

Detailed behavior
During Pod startup, the kubelet delays running init containers until the networking and storage are ready. Then the kubelet runs the
Pod's init containers in the order they appear in the Pod's spec.


Each init container must exit successfully before the next container starts. If a container fails to start due to the runtime or exits with
failure, it is retried according to the Pod restartPolicy . However, if the Pod restartPolicy is set to Always, the init containers use
restartPolicy OnFailure.

A Pod cannot be Ready until all init containers have succeeded. The ports on an init container are not aggregated under a Service. A
Pod that is initializing is in the Pending state but should have a condition Initialized set to false.

If the Pod restarts, or is restarted, all init containers must execute again.

Changes to the init container spec are limited to the container image field. Altering an init container image field is equivalent to
restarting the Pod.

Because init containers can be restarted, retried, or re-executed, init container code should be idempotent. In particular, code that
writes to files on EmptyDirs should be prepared for the possibility that an output file already exists.

Init containers have all of the fields of an app container. However, Kubernetes prohibits readinessProbe from being used because
init containers cannot define readiness distinct from completion. This is enforced during validation.

Use activeDeadlineSeconds on the Pod to prevent init containers from failing forever. The active deadline includes init containers.
However, it is recommended to use activeDeadlineSeconds only if teams deploy their application as a Job, because
activeDeadlineSeconds has an effect even after the init containers have finished: a Pod that is already running correctly would be
killed once the deadline you set elapses.
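
As a sketch of that recommendation (the Job name, images, and commands are placeholders), a Job whose Pod template sets
activeDeadlineSeconds limits the total time allowed for initialization and the main work together:

apiVersion: batch/v1
kind: Job
metadata:
  name: init-deadline-demo         # hypothetical name
spec:
  template:
    spec:
      activeDeadlineSeconds: 600   # init containers plus app containers must finish within 10 minutes
      restartPolicy: Never
      initContainers:
      - name: setup
        image: busybox:1.28
        command: ['sh', '-c', 'echo preparing && sleep 5']
      containers:
      - name: work
        image: busybox:1.28
        command: ['sh', '-c', 'echo working && sleep 30']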

The name of each app and init container in a Pod must be unique; a validation error is thrown for any container sharing a name with
another.

Resource sharing within containers


Given the order of execution for init, sidecar and app containers, the following rules for resource usage apply:

The highest of any particular resource request or limit defined on all init containers is the effective init request/limit. If any
resource has no resource limit specified this is considered as the highest limit.
The Pod's effective request/limit for a resource is the higher of:
the sum of all app containers request/limit for a resource
the effective init request/limit for a resource
Scheduling is done based on effective requests/limits, which means init containers can reserve resources for initialization that
are not used during the life of the Pod.
The QoS (quality of service) tier of the Pod's effective QoS tier is the QoS tier for init containers and app containers alike.

Quota and limits are applied based on the effective Pod request and limit.
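
A small worked example may help; in the hypothetical sketch below (names and images are placeholders), the effective init CPU
request is 200m (the highest init container request), the app containers sum to 150m, so the Pod's effective CPU request used for
scheduling and quota is max(200m, 150m) = 200m:

apiVersion: v1
kind: Pod
metadata:
  name: effective-resources-demo   # hypothetical name
spec:
  initContainers:
  - name: init-a
    image: busybox:1.28
    command: ['sh', '-c', 'echo init && sleep 1']
    resources:
      requests:
        cpu: 200m                  # highest init request -> effective init request
  containers:
  - name: app-a
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: 100m
  - name: app-b
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: 50m                   # app sum = 150m; effective Pod request = max(200m, 150m) = 200m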

Init containers and Linux cgroups


On Linux, resource allocations for Pod level control groups (cgroups) are based on the effective Pod request and limit, the same as
the scheduler.

Pod restart reasons


A Pod can restart, causing re-execution of init containers, for the following reasons:

The Pod infrastructure container is restarted. This is uncommon and would have to be done by someone with root access to
nodes.
All containers in a Pod are terminated while restartPolicy is set to Always, forcing a restart, and the init container completion
record has been lost due to garbage collection.

The Pod will not be restarted when the init container image is changed, or the init container completion record has been lost due to
garbage collection. This applies for Kubernetes v1.20 and later. If you are using an earlier version of Kubernetes, consult the
documentation for the version you are using.

What's next
Learn more about the following:

Creating a Pod that has an init container.



Debug init containers.


Overview of kubelet and kubectl.
Types of probes: liveness, readiness, startup probe.
Sidecar containers.


4.1.3 - Sidecar Containers


ⓘ FEATURE STATE: Kubernetes v1.29 [beta]

Sidecar containers are the secondary containers that run along with the main application container within the same Pod. These
containers are used to enhance or to extend the functionality of the primary app container by providing additional services, or
functionality such as logging, monitoring, security, or data synchronization, without directly altering the primary application code.

Typically, you only have one app container in a Pod. For example, if you have a web application that requires a local webserver, the
local webserver is a sidecar and the web application itself is the app container.

Sidecar containers in Kubernetes


Kubernetes implements sidecar containers as a special case of init containers; sidecar containers remain running after Pod startup.
This document uses the term regular init containers to clearly refer to containers that only run during Pod startup.

Provided that your cluster has the SidecarContainers feature gate enabled (the feature is active by default since Kubernetes v1.29),
you can specify a restartPolicy for containers listed in a Pod's initContainers field. These restartable sidecar containers are
independent from other init containers and from the main application container(s) within the same pod. These can be started,
stopped, or restarted without affecting the main application container and other init containers.

You can also run a Pod with multiple containers that are not marked as init or sidecar containers. This is appropriate if the
containers within the Pod are required for the Pod to work overall, but you don't need to control which containers start or stop first.
You could also do this if you need to support older versions of Kubernetes that don't support a container-level restartPolicy field.

Example application
Here's an example of a Deployment with two containers, one of which is a sidecar:

application/deployment-sidecar.yaml


apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: alpine:latest
          command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done']
          volumeMounts:
            - name: data
              mountPath: /opt
      initContainers:
        - name: logshipper
          image: alpine:latest
          restartPolicy: Always
          command: ['sh', '-c', 'tail -F /opt/logs.txt']
          volumeMounts:
            - name: data
              mountPath: /opt
      volumes:
        - name: data
          emptyDir: {}

Sidecar containers and Pod lifecycle


If an init container is created with its restartPolicy set to Always , it will start and remain running during the entire life of the Pod.
This can be helpful for running supporting services separated from the main application containers.

If a readinessProbe is specified for this init container, its result will be used to determine the ready state of the Pod.

Since these containers are defined as init containers, they benefit from the same ordering and sequential guarantees as regular init
containers, allowing you to mix sidecar containers with regular init containers for complex Pod initialization flows.

Compared to regular init containers, sidecars defined within initContainers continue to run after they have started. This is
important when there is more than one entry inside .spec.initContainers for a Pod. After a sidecar-style init container is running
(the kubelet has set the started status for that init container to true), the kubelet then starts the next init container from the
ordered .spec.initContainers list. That status either becomes true because there is a process running in the container and no
startup probe defined, or as a result of its startupProbe succeeding.
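
To illustrate that ordering, here is a hypothetical fragment of a Pod's initContainers list (images and port are placeholders); the
kubelet waits for the sidecar's startupProbe to succeed before it starts the next init container, and the readinessProbe feeds into the
Pod's readiness:

initContainers:
- name: proxy-sidecar              # sidecar-style init container (hypothetical)
  image: registry.example/proxy:1.0
  restartPolicy: Always
  startupProbe:                    # the next init container starts only after this succeeds
    httpGet:
      path: /ready
      port: 15000
  readinessProbe:                  # also contributes to the Pod's Ready condition
    httpGet:
      path: /ready
      port: 15000
- name: run-migrations             # regular init container; runs after the sidecar has started
  image: registry.example/migrate:1.0
  command: ['sh', '-c', 'echo migrating && sleep 5']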

Jobs with sidecar containers


If you define a Job that uses a sidecar implemented as a Kubernetes-style init container, the sidecar container in each Pod does not
prevent the Job from completing after the main container has finished.

Here's an example of a Job with two containers, one of which is a sidecar:

application/job/job-sidecar.yaml


apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  template:
    spec:
      containers:
        - name: myjob
          image: alpine:latest
          command: ['sh', '-c', 'echo "logging" > /opt/logs.txt']
          volumeMounts:
            - name: data
              mountPath: /opt
      initContainers:
        - name: logshipper
          image: alpine:latest
          restartPolicy: Always
          command: ['sh', '-c', 'tail -F /opt/logs.txt']
          volumeMounts:
            - name: data
              mountPath: /opt
      restartPolicy: Never
      volumes:
        - name: data
          emptyDir: {}

Differences from application containers


Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic;
instead, they provide supporting functionality to the main application.

Sidecar containers have their own independent lifecycles. They can be started, stopped, and restarted independently of app
containers. This means you can update, scale, or maintain sidecar containers without affecting the primary application.

Sidecar containers share the same network and storage namespaces with the primary container. This co-location allows them to
interact closely and share resources.

Differences from init containers


Sidecar containers work alongside the main container, extending its functionality and providing additional services.

Sidecar containers run concurrently with the main application container. They are active throughout the lifecycle of the pod and can
be started and stopped independently of the main container. Unlike init containers, sidecar containers support probes to control
their lifecycle.

Sidecar containers can interact directly with the main application containers, because like init containers they always share the same
network, and can optionally also share volumes (filesystems).

Init containers stop before the main containers start up, so init containers cannot exchange messages with the app container in a
Pod. Any data passing is one-way (for example, an init container can put information inside an emptyDir volume).

Resource sharing within containers


Given the order of execution for init, sidecar and app containers, the following rules for resource usage apply:

The highest of any particular resource request or limit defined on all init containers is the effective init request/limit. If any
resource has no resource limit specified this is considered as the highest limit.
The Pod's effective request/limit for a resource is the sum of pod overhead and the higher of:

the sum of all non-init containers(app and sidecar containers) request/limit for a resource
the effective init request/limit for a resource
Scheduling is done based on effective requests/limits, which means init containers can reserve resources for initialization that
are not used during the life of the Pod.
The QoS (quality of service) tier of the Pod's effective QoS tier is the QoS tier for all init, sidecar and app containers alike.

Quota and limits are applied based on the effective Pod request and limit.

Sidecar containers and Linux cgroups


On Linux, resource allocations for Pod level control groups (cgroups) are based on the effective Pod request and limit, the same as
the scheduler.

What's next
Read a blog post on native sidecar containers.
Read about creating a Pod that has an init container.
Learn about the types of probes: liveness, readiness, startup probe.
Learn about pod overhead.


4.1.4 - Ephemeral Containers


ⓘ FEATURE STATE: Kubernetes v1.25 [stable]

This page provides an overview of ephemeral containers: a special type of container that runs temporarily in an existing Pod to
accomplish user-initiated actions such as troubleshooting. You use ephemeral containers to inspect services rather than to build
applications.

Understanding ephemeral containers


Pods are the fundamental building block of Kubernetes applications. Since Pods are intended to be disposable and replaceable, you
cannot add a container to a Pod once it has been created. Instead, you usually delete and replace Pods in a controlled fashion using
deployments.

Sometimes it's necessary to inspect the state of an existing Pod, however, for example to troubleshoot a hard-to-reproduce bug. In
these cases you can run an ephemeral container in an existing Pod to inspect its state and run arbitrary commands.

What is an ephemeral container?


Ephemeral containers differ from other containers in that they lack guarantees for resources or execution, and they will never be
automatically restarted, so they are not appropriate for building applications. Ephemeral containers are described using the same
ContainerSpec as regular containers, but many fields are incompatible and disallowed for ephemeral containers.

Ephemeral containers may not have ports, so fields such as ports , livenessProbe , readinessProbe are disallowed.
Pod resource allocations are immutable, so setting resources is disallowed.
For a complete list of allowed fields, see the EphemeralContainer reference documentation.

Ephemeral containers are created using a special ephemeralcontainers handler in the API rather than by adding them directly to
pod.spec , so it's not possible to add an ephemeral container using kubectl edit .

Like regular containers, you may not change or remove an ephemeral container after you have added it to a Pod.

Note:
Ephemeral containers are not supported by static pods.

Uses for ephemeral containers


Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed
or a container image doesn't include debugging utilities.

In particular, distroless images enable you to deploy minimal container images that reduce attack surface and exposure to bugs and
vulnerabilities. Since distroless images do not include a shell or any debugging utilities, it's difficult to troubleshoot distroless images
using kubectl exec alone.

When using ephemeral containers, it's helpful to enable process namespace sharing so you can view processes in other containers.
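
For example, a Pod that opts in to process namespace sharing when it is created might look like this minimal sketch (the name and
image are placeholders); an ephemeral debugging container added to it later can then see the processes of the other containers:

apiVersion: v1
kind: Pod
metadata:
  name: debuggable-app             # hypothetical name
spec:
  shareProcessNamespace: true      # lets an ephemeral container view other containers' processes
  containers:
  - name: app
    image: registry.example/app:distroless   # hypothetical distroless-style image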

What's next
Learn how to debug pods using ephemeral containers.


4.1.5 - Disruptions
This guide is for application owners who want to build highly available applications, and thus need to understand what types of
disruptions can happen to Pods.

It is also for cluster administrators who want to perform automated cluster actions, like upgrading and autoscaling clusters.

Voluntary and involuntary disruptions


Pods do not disappear until someone (a person or a controller) destroys them, or there is an unavoidable hardware or system
software error.

We call these unavoidable cases involuntary disruptions to an application. Examples are:

a hardware failure of the physical machine backing the node


cluster administrator deletes VM (instance) by mistake
cloud provider or hypervisor failure makes VM disappear
a kernel panic
the node disappears from the cluster due to cluster network partition
eviction of a pod due to the node being out-of-resources.

Except for the out-of-resources condition, all these conditions should be familiar to most users; they are not specific to Kubernetes.

We call other cases voluntary disruptions. These include both actions initiated by the application owner and those initiated by a
Cluster Administrator. Typical application owner actions include:

deleting the deployment or other controller that manages the pod


updating a deployment's pod template causing a restart
directly deleting a pod (e.g. by accident)

Cluster administrator actions include:

Draining a node for repair or upgrade.


Draining a node from a cluster to scale the cluster down (learn about Cluster Autoscaling).
Removing a pod from a node to permit something else to fit on that node.

These actions might be taken directly by the cluster administrator, or by automation run by the cluster administrator, or by your
cluster hosting provider.

Ask your cluster administrator or consult your cloud provider or distribution documentation to determine if any sources of voluntary
disruptions are enabled for your cluster. If none are enabled, you can skip creating Pod Disruption Budgets.

Caution:
Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example, deleting deployments or pods bypasses
Pod Disruption Budgets.

Dealing with disruptions


Here are some ways to mitigate involuntary disruptions:

Ensure your pod requests the resources it needs.


Replicate your application if you need higher availability. (Learn about running replicated stateless and stateful applications.)
For even higher availability when running replicated applications, spread applications across racks (using anti-affinity) or across
zones (if using a multi-zone cluster.)

The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are no automated voluntary disruptions (only
user-triggered ones). However, your cluster administrator or hosting provider may run some additional services which cause
voluntary disruptions. For example, rolling out node software updates can cause voluntary disruptions. Also, some implementations
of cluster (node) autoscaling may cause voluntary disruptions to defragment and compact nodes. Your cluster administrator or
hosting provider should have documented what level of voluntary disruptions, if any, to expect. Certain configuration options, such
as using PriorityClasses in your pod spec can also cause voluntary (and involuntary) disruptions.

Pod disruption budgets


ⓘ FEATURE STATE: Kubernetes v1.21 [stable]

Kubernetes offers features to help you run highly available applications even when you introduce frequent voluntary disruptions.

As an application owner, you can create a PodDisruptionBudget (PDB) for each application. A PDB limits the number of Pods of a
replicated application that are down simultaneously from voluntary disruptions. For example, a quorum-based application would
like to ensure that the number of replicas running is never brought below the number needed for a quorum. A web front end might
want to ensure that the number of replicas serving load never falls below a certain percentage of the total.

Cluster managers and hosting providers should use tools which respect PodDisruptionBudgets by calling the Eviction API instead of
directly deleting pods or deployments.

For example, the kubectl drain subcommand lets you mark a node as going out of service. When you run kubectl drain , the tool
tries to evict all of the Pods on the Node you're taking out of service. The eviction request that kubectl submits on your behalf may
be temporarily rejected, so the tool periodically retries all failed requests until all Pods on the target node are terminated, or until a
configurable timeout is reached.

A PDB specifies the number of replicas that an application can tolerate having, relative to how many it is intended to have. For
example, a Deployment which has a .spec.replicas: 5 is supposed to have 5 pods at any given time. If its PDB allows for there to be
4 at a time, then the Eviction API will allow voluntary disruption of one (but not two) pods at a time.

The group of pods that comprise the application is specified using a label selector, the same as the one used by the application's
controller (deployment, stateful-set, etc).
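
Continuing the five-replica example above, a minimal PodDisruptionBudget might look like the following sketch (the object name and
labels are assumptions); with minAvailable: 4, the Eviction API allows at most one Pod to be voluntarily disrupted at a time:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb                  # hypothetical name
spec:
  minAvailable: 4                  # keep at least 4 of the 5 intended replicas available
  selector:
    matchLabels:
      app: myapp                   # must match the labels selected by the workload's controller
  # Optionally, allow already-unhealthy Pods to be evicted during a drain:
  # unhealthyPodEvictionPolicy: AlwaysAllow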

The "intended" number of pods is computed from the .spec.replicas of the workload resource that is managing those pods. The
control plane discovers the owning workload resource by examining the .metadata.ownerReferences of the Pod.

Involuntary disruptions cannot be prevented by PDBs; however they do count against the budget.

Pods which are deleted or unavailable due to a rolling upgrade to an application do count against the disruption budget, but
workload resources (such as Deployment and StatefulSet) are not limited by PDBs when doing rolling upgrades. Instead, the
handling of failures during application updates is configured in the spec for the specific workload resource.

It is recommended to set AlwaysAllow Unhealthy Pod Eviction Policy to your PodDisruptionBudgets to support eviction of
misbehaving applications during a node drain. The default behavior is to wait for the application pods to become healthy before the
drain can proceed.

When a pod is evicted using the eviction API, it is gracefully terminated, honoring the terminationGracePeriodSeconds setting in its
PodSpec.

PodDisruptionBudget example
Consider a cluster with 3 nodes, node-1 through node-3 . The cluster is running several applications. One of them has 3 replicas
initially called pod-a , pod-b , and pod-c . Another, unrelated pod without a PDB, called pod-x , is also shown. Initially, the pods are
laid out as follows:

node-1            node-2            node-3

pod-a available   pod-b available   pod-c available

pod-x available

All 3 pods are part of a deployment, and they collectively have a PDB which requires there be at least 2 of the 3 pods to be available
at all times.


For example, assume the cluster administrator wants to reboot into a new kernel version to fix a bug in the kernel. The cluster
administrator first tries to drain node-1 using the kubectl drain command. That tool tries to evict pod-a and pod-x . This succeeds
immediately. Both pods go into the terminating state at the same time. This puts the cluster in this state:

node-1 draining     node-2            node-3

pod-a terminating   pod-b available   pod-c available

pod-x terminating

The deployment notices that one of the pods is terminating, so it creates a replacement called pod-d . Since node-1 is cordoned, it
lands on another node. Something has also created pod-y as a replacement for pod-x .

(Note: for a StatefulSet, pod-a , which would be called something like pod-0 , would need to terminate completely before its
replacement, which is also called pod-0 but has a different UID, could be created. Otherwise, the example applies to a StatefulSet as
well.)

Now the cluster is in this state:

node-1 draining     node-2            node-3

pod-a terminating   pod-b available   pod-c available

pod-x terminating   pod-d starting    pod-y

At some point, the pods terminate, and the cluster looks like this:

node-1 drained      node-2            node-3

                    pod-b available   pod-c available

                    pod-d starting    pod-y

At this point, if an impatient cluster administrator tries to drain node-2 or node-3 , the drain command will block, because there are
only 2 available pods for the deployment, and its PDB requires at least 2. After some time passes, pod-d becomes available.

The cluster state now looks like this:

node-1 drained      node-2            node-3

                    pod-b available   pod-c available

                    pod-d available   pod-y

Now, the cluster administrator tries to drain node-2 . The drain command will try to evict the two pods in some order, say pod-b first
and then pod-d . It will succeed at evicting pod-b . But, when it tries to evict pod-d , it will be refused because that would leave only
one pod available for the deployment.

The deployment creates a replacement for pod-b called pod-e . Because there are not enough resources in the cluster to schedule
pod-e, the drain will again block. The cluster may end up in this state:

node-1 drained      node-2              node-3            no node

                    pod-b terminating   pod-c available   pod-e pending

                    pod-d available     pod-y

At this point, the cluster administrator needs to add a node back to the cluster to proceed with the upgrade.

You can see how Kubernetes varies the rate at which disruptions can happen, according to:

how many replicas an application needs


how long it takes to gracefully shutdown an instance


how long it takes a new instance to start up
the type of controller
the cluster's resource capacity

Pod disruption conditions


ⓘ FEATURE STATE: Kubernetes v1.26 [beta]

Note:
In order to use this behavior, you must have the PodDisruptionConditions feature gate enabled in your cluster.

When enabled, a dedicated Pod DisruptionTarget condition is added to indicate that the Pod is about to be deleted due to a
disruption. The reason field of the condition additionally indicates one of the following reasons for the Pod termination:

PreemptionByScheduler

Pod is due to be preempted by a scheduler in order to accommodate a new Pod with a higher priority. For more information, see
Pod priority preemption.

DeletionByTaintManager

Pod is due to be deleted by Taint Manager (which is part of the node lifecycle controller within kube-controller-manager) due to a
NoExecute taint that the Pod does not tolerate; see taint-based evictions.

EvictionByEvictionAPI

Pod has been marked for eviction using the Kubernetes API .

DeletionByPodGC

A Pod that is bound to a Node which no longer exists is due to be deleted by Pod garbage collection.

TerminationByKubelet

Pod has been terminated by the kubelet, because of either node pressure eviction or the graceful node shutdown.

Note:
A Pod disruption might be interrupted. The control plane might re-attempt to continue the disruption of the same Pod, but it is
not guaranteed. As a result, the DisruptionTarget condition might be added to a Pod, but that Pod might then not actually be
deleted. In such a situation, after some time, the Pod disruption condition will be cleared.

When the PodDisruptionConditions feature gate is enabled, along with cleaning up the pods, the Pod garbage collector (PodGC) will
also mark them as failed if they are in a non-terminal phase (see also Pod garbage collection).

When using a Job (or CronJob), you may want to use these Pod disruption conditions as part of your Job's Pod failure policy.
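As an illustrative sketch (the Job name, container name, and image are placeholders), a Job could ignore failures caused by disruptions by matching the DisruptionTarget condition in its pod failure policy, so that such failures do not count against backoffLimit:

apiVersion: batch/v1
kind: Job
metadata:
  name: example-job                        # placeholder name
spec:
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: Ignore                       # disruption-caused failures don't count against backoffLimit
      onPodConditions:
      - type: DisruptionTarget
  template:
    spec:
      restartPolicy: Never                 # required when using a pod failure policy
      containers:
      - name: main                         # placeholder container
        image: registry.example/app:latest # placeholder image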

Separating Cluster Owner and Application Owner Roles


Often, it is useful to think of the Cluster Manager and Application Owner as separate roles with limited knowledge of each other. This
separation of responsibilities may make sense in these scenarios:

when there are many application teams sharing a Kubernetes cluster, and there is natural specialization of roles
when third-party tools or services are used to automate cluster management

Pod Disruption Budgets support this separation of roles by providing an interface between the roles.

If you do not have such a separation of responsibilities in your organization, you may not need to use Pod Disruption Budgets.


How to perform Disruptive Actions on your Cluster


If you are a Cluster Administrator, and you need to perform a disruptive action on all the nodes in your cluster, such as a node or
system software upgrade, here are some options:

Accept downtime during the upgrade.

Failover to another complete replica cluster.
    No downtime, but may be costly both for the duplicated nodes and for human effort to orchestrate the switchover.

Write disruption tolerant applications and use PDBs.
    No downtime.
    Minimal resource duplication.
    Allows more automation of cluster administration.
    Writing disruption-tolerant applications is tricky, but the work to tolerate voluntary disruptions largely overlaps with work
    to support autoscaling and tolerating involuntary disruptions.

What's next
Follow steps to protect your application by configuring a Pod Disruption Budget.

Learn more about draining nodes

Learn about updating a deployment including steps to maintain its availability during the rollout.


4.1.6 - Pod Quality of Service Classes


This page introduces Quality of Service (QoS) classes in Kubernetes, and explains how Kubernetes assigns a QoS class to each Pod as a
consequence of the resource constraints that you specify for the containers in that Pod. Kubernetes relies on this classification to
make decisions about which Pods to evict when there are not enough available resources on a Node.

Quality of Service classes


Kubernetes classifies the Pods that you run and allocates each Pod into a specific quality of service (QoS) class. Kubernetes uses that
classification to influence how different pods are handled. Kubernetes does this classification based on the resource requests of the
Containers in that Pod, along with how those requests relate to resource limits. This is known as Quality of Service (QoS) class.
Kubernetes assigns every Pod a QoS class based on the resource requests and limits of its component Containers. QoS classes are
used by Kubernetes to decide which Pods to evict from a Node experiencing Node Pressure. The possible QoS classes are
Guaranteed , Burstable , and BestEffort . When a Node runs out of resources, Kubernetes will first evict BestEffort Pods running on
that Node, followed by Burstable and finally Guaranteed Pods. When this eviction is due to resource pressure, only Pods exceeding
resource requests are candidates for eviction.
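The class that was assigned is recorded in the Pod's status. As a quick check (substituting your own Pod name for my-pod), you can query it with:

kubectl get pod my-pod --output=jsonpath='{.status.qosClass}'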

Guaranteed
Pods that are Guaranteed have the strictest resource limits and are least likely to face eviction. They are guaranteed not to be killed
until they exceed their limits or there are no lower-priority Pods that can be preempted from the Node. They may not acquire
resources beyond their specified limits. These Pods can also make use of exclusive CPUs using the static CPU management policy.

Criteria
For a Pod to be given a QoS class of Guaranteed :

Every Container in the Pod must have a memory limit and a memory request.
For every Container in the Pod, the memory limit must equal the memory request.
Every Container in the Pod must have a CPU limit and a CPU request.
For every Container in the Pod, the CPU limit must equal the CPU request.
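As a minimal sketch (the Pod name, container name, image, and resource values are illustrative), a Pod that satisfies these criteria, and is therefore assigned the Guaranteed class, could look like:

apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example             # illustrative name
spec:
  containers:
  - name: app                              # illustrative container name
    image: registry.example/app:latest     # illustrative image
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "500m"                        # CPU limit equals the CPU request
        memory: "256Mi"                    # memory limit equals the memory request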

Burstable
Pods that are Burstable have some lower-bound resource guarantees based on the request, but do not require a specific limit. If a
limit is not specified, it defaults to a limit equivalent to the capacity of the Node, which allows the Pods to flexibly increase their
resources if resources are available. In the event of Pod eviction due to Node resource pressure, these Pods are evicted only after all
BestEffort Pods are evicted. Because a Burstable Pod can include a Container that has no resource limits or requests, a Pod that is
Burstable can try to use any amount of node resources.

Criteria
A Pod is given a QoS class of Burstable if:

The Pod does not meet the criteria for QoS class Guaranteed .
At least one Container in the Pod has a memory or CPU request or limit.

BestEffort
Pods in the BestEffort QoS class can use node resources that aren't specifically assigned to Pods in other QoS classes. For example,
if you have a node with 16 CPU cores available to the kubelet, and you assign 4 CPU cores to a Guaranteed Pod, then a Pod in the
BestEffort QoS class can try to use any amount of the remaining 12 CPU cores.

The kubelet prefers to evict BestEffort Pods if the node comes under resource pressure.

Criteria
A Pod has a QoS class of BestEffort if it doesn't meet the criteria for either Guaranteed or Burstable . In other words, a Pod is
BestEffort only if none of the Containers in the Pod have a memory limit or a memory request, and none of the Containers in the
Pod have a CPU limit or a CPU request. Containers in a Pod can request other resources (not CPU or memory) and still be classified

as BestEffort .

Memory QoS with cgroup v2


ⓘ FEATURE STATE: Kubernetes v1.22 [alpha]

Memory QoS uses the memory controller of cgroup v2 to guarantee memory resources in Kubernetes. Memory requests and limits
of containers in a pod are used to set the specific interfaces memory.min and memory.high provided by the memory controller. When
memory.min is set to memory requests, memory resources are reserved and never reclaimed by the kernel; this is how Memory QoS
ensures memory availability for Kubernetes pods. And if memory limits are set in the container, this means that the system needs to
limit container memory usage; Memory QoS uses memory.high to throttle workload approaching its memory limit, ensuring that the
system is not overwhelmed by instantaneous memory allocation.

Memory QoS relies on QoS class to determine which settings to apply; however, these are different mechanisms that both provide
controls over quality of service.

Some behavior is independent of QoS class


Certain behavior is independent of the QoS class assigned by Kubernetes. For example:

Any Container exceeding a resource limit will be killed and restarted by the kubelet without affecting other Containers in that
Pod.

If a Container exceeds its resource request and the node it runs on faces resource pressure, the Pod it is in becomes a
candidate for eviction. If this occurs, all Containers in the Pod will be terminated. Kubernetes may create a replacement Pod,
usually on a different node.

The resource request of a Pod is equal to the sum of the resource requests of its component Containers, and the resource limit
of a Pod is equal to the sum of the resource limits of its component Containers.

The kube-scheduler does not consider QoS class when selecting which Pods to preempt. Preemption can occur when a cluster
does not have enough resources to run all the Pods you defined.

What's next
Learn about resource management for Pods and Containers.
Learn about Node-pressure eviction.
Learn about Pod priority and preemption.
Learn about Pod disruptions.
Learn how to assign memory resources to containers and pods.
Learn how to assign CPU resources to containers and pods.
Learn how to configure Quality of Service for Pods.


4.1.7 - User Namespaces


ⓘ FEATURE STATE: Kubernetes v1.30 [beta]

This page explains how user namespaces are used in Kubernetes pods. A user namespace isolates the user running inside the
container from the one in the host.

A process running as root in a container can run as a different (non-root) user in the host; in other words, the process has full
privileges for operations inside the user namespace, but is unprivileged for operations outside the namespace.

You can use this feature to reduce the damage a compromised container can do to the host or other pods in the same node. There
are several security vulnerabilities rated either HIGH or CRITICAL that were not exploitable when user namespaces are active. It is expected that user namespaces will mitigate some future vulnerabilities too.

Before you begin


Note: This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project
authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the content guide
before submitting a change. More information.

This is a Linux-only feature and support is needed in Linux for idmap mounts on the filesystems used. This means:

On the node, the filesystem you use for /var/lib/kubelet/pods/ , or the custom directory you configure for this, needs idmap
mount support.
All the filesystems used in the pod's volumes must support idmap mounts.

In practice this means you need at least Linux 6.3, as tmpfs started supporting idmap mounts in that version. This is usually needed
as several Kubernetes features use tmpfs (the service account token that is mounted by default uses a tmpfs, Secrets use a tmpfs,
etc.)

Some popular filesystems that support idmap mounts in Linux 6.3 are: btrfs, ext4, xfs, fat, tmpfs, overlayfs.

In addition, the container runtime and its underlying OCI runtime must support user namespaces. The following OCI runtimes offer
support:

crun version 1.9 or greater (version 1.13+ is recommended).

Note:
Many OCI runtimes do not include the support needed for using user namespaces in Linux pods. If you use a managed
Kubernetes, or have downloaded it from packages and set it up, it's likely that nodes in your cluster use a runtime that doesn't
include this support. For example, the most widely used OCI runtime is runc , and version 1.1.z of runc doesn't support all the
features needed by the Kubernetes implementation of user namespaces.

If there is a newer release of runc than 1.1 available for use, check its documentation and release notes for compatibility (look
for idmap mounts support in particular, because that is the missing feature).

To use user namespaces with Kubernetes, you also need to use a CRI container runtime to use this feature with Kubernetes pods:

CRI-O: version 1.25 (and later) supports user namespaces for containers.

containerd v1.7 is not compatible with the userns support in Kubernetes v1.27 to v1.30. Kubernetes v1.25 and v1.26 used an earlier
implementation that is compatible with containerd v1.7, in terms of userns support. If you are using a version of Kubernetes other
than 1.30, check the documentation for that version of Kubernetes for the most relevant information. If there is a newer release of
containerd than v1.7 available for use, also check the containerd documentation for compatibility information.

You can see the status of user namespaces support in cri-dockerd tracked in an issue on GitHub.


Introduction
User namespaces are a Linux feature that allows mapping users in the container to different users in the host. Furthermore, the
capabilities granted to a pod in a user namespace are valid only in the namespace and void outside of it.

A pod can opt-in to use user namespaces by setting the pod.spec.hostUsers field to false .
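For example, a minimal Pod that opts in to a user namespace might look like this (the name, image, and command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: userns-example                     # placeholder name
spec:
  hostUsers: false                         # opt in to a user namespace for this pod
  containers:
  - name: shell                            # placeholder container name
    image: registry.example/shell:latest   # placeholder image
    command: ["sleep", "infinity"]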

The kubelet will pick host UIDs/GIDs a pod is mapped to, and will do so in a way to guarantee that no two pods on the same node
use the same mapping.

The runAsUser , runAsGroup , fsGroup , etc. fields in the pod.spec always refer to the user inside the container.

When this feature is enabled, the valid UIDs/GIDs are in the range 0-65535. This applies to files and processes ( runAsUser , runAsGroup ,
etc.).

Files using a UID/GID outside this range will be seen as belonging to the overflow ID, usually 65534 (configured in
/proc/sys/kernel/overflowuid and /proc/sys/kernel/overflowgid ). However, it is not possible to modify those files, even by running
as the 65534 user/group.

Most applications that need to run as root but don't access other host namespaces or resources should continue to run fine, without any changes needed, if user namespaces are activated.

Understanding user namespaces for pods


Several container runtimes with their default configuration (like Docker Engine, containerd, CRI-O) use Linux namespaces for
isolation. Other technologies exist and can be used with those runtimes too (e.g. Kata Containers uses VMs instead of Linux
namespaces). This page is applicable for container runtimes using Linux namespaces for isolation.

When creating a pod, by default, several new namespaces are used for isolation: a network namespace to isolate the network of the
container, a PID namespace to isolate the view of processes, etc. If a user namespace is used, this will isolate the users in the
container from the users in the node.

This means containers can run as root and be mapped to a non-root user on the host. Inside the container the process will think it is
running as root (and therefore tools like apt , yum , etc. work fine), while in reality the process doesn't have privileges on the host.
You can verify this, for example, by checking which user the container process is running as when you execute ps aux from the host. The user that ps shows is not the same as the user you see if you run the id command inside the container.

This abstraction limits what can happen, for example, if the container manages to escape to the host. Given that the container is
running as a non-privileged user on the host, what it can do to the host is limited.

Furthermore, as the users of each pod are mapped to different, non-overlapping users on the host, what they can do to other pods is limited too.

Capabilities granted to a pod are also limited to the pod's user namespace and are mostly invalid outside of it; some are even completely void.
Here are two examples:

CAP_SYS_MODULE does not have any effect if granted to a pod using user namespaces; the pod isn't able to load kernel modules.
CAP_SYS_ADMIN is limited to the pod's user namespace and invalid outside of it.

Without a user namespace, a container running as root has, in the case of a container breakout, root privileges on the node. And if some capabilities were granted to the container, those capabilities are valid on the host too. None of this is true when user namespaces are used.

If you want to know more details about what changes when user namespaces are in use, see man 7 user_namespaces .

Set up a node to support user namespaces


By default, the kubelet assigns pods UIDs/GIDs above the range 0-65535, based on the assumption that the host's files and
processes use UIDs/GIDs within this range, which is standard for most Linux distributions. This approach prevents any overlap
between the UIDs/GIDs of the host and those of the pods.

Avoiding the overlap is important to mitigate the impact of vulnerabilities such as CVE-2021-25741, where a pod can potentially read
arbitrary files in the host. If the UIDs/GIDs of the pod and the host don't overlap, what a pod is able to do is limited: the pod
UID/GID won't match the host's file owner/group.


The kubelet can use a custom range for user IDs and group IDs for pods. To configure a custom range, the node needs to have:

A user kubelet in the system (you cannot use any other username here)
The binary getsubids installed (part of shadow-utils) and in the PATH for the kubelet binary.
A configuration of subordinate UIDs/GIDs for the kubelet user (see man 5 subuid and man 5 subgid).

This setting only gathers the UID/GID range configuration and does not change the user executing the kubelet .

You must follow some constraints for the subordinate ID range that you assign to the kubelet user:

The subordinate user ID, that starts the UID range for Pods, must be a multiple of 65536 and must also be greater than or
equal to 65536. In other words, you cannot use any ID from the range 0-65535 for Pods; the kubelet imposes this restriction to
make it difficult to create an accidentally insecure configuration.

The subordinate ID count must be a multiple of 65536

The subordinate ID count must be at least 65536 x <maxPods> where <maxPods> is the maximum number of pods that can run
on the node.

You must assign the same range for both user IDs and for group IDs. It doesn't matter if other users have user ID ranges that don't align with the group ID ranges.

None of the assigned ranges should overlap with any other assignment.

The subordinate configuration must be only one line. In other words, you can't have multiple ranges.

For example, you could define /etc/subuid and /etc/subgid to both have these entries for the kubelet user:

# The format is
# name:firstID:count of IDs
# where
# - firstID is 65536 (the minimum value possible)
# - count of IDs is 110 (the default limit for the number of pods on a node) * 65536
kubelet:65536:7208960
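To check that the kubelet user actually has a subordinate ID range assigned, you can query it with getsubids (from shadow-utils). Assuming the entry above, the output would look roughly like the following; the exact formatting may vary between shadow-utils versions:

getsubids kubelet
# expected output (approximate): 0: kubelet 65536 7208960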

Integration with Pod security admission checks


ⓘ FEATURE STATE: Kubernetes v1.29 [alpha]

For Linux Pods that enable user namespaces, Kubernetes relaxes the application of Pod Security Standards in a controlled way. This
behavior can be controlled by the feature gate UserNamespacesPodSecurityStandards , which allows an early opt-in for end users.
Admins have to ensure that user namespaces are enabled by all nodes within the cluster if using the feature gate.

If you enable the associated feature gate and create a Pod that uses user namespaces, the following fields won't be constrained
even in contexts that enforce the Baseline or Restricted pod security standard. This behavior does not present a security concern
because root inside a Pod with user namespaces actually refers to the user inside the container, which is never mapped to a privileged user on the host. Here's the list of fields that are not checked for Pods in those circumstances:

spec.securityContext.runAsNonRoot

spec.containers[*].securityContext.runAsNonRoot

spec.initContainers[*].securityContext.runAsNonRoot

spec.ephemeralContainers[*].securityContext.runAsNonRoot

spec.securityContext.runAsUser

spec.containers[*].securityContext.runAsUser

spec.initContainers[*].securityContext.runAsUser

spec.ephemeralContainers[*].securityContext.runAsUser

Limitations
When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false
then you are not allowed to set any of:

hostNetwork: true

hostIPC: true

hostPID: true

What's next
Take a look at Use a User Namespace With a Pod


4.1.8 - Downward API


There are two ways to expose Pod and container fields to a running container: environment variables, and as
files that are populated by a special volume type. Together, these two ways of exposing Pod and container
fields are called the downward API.

It is sometimes useful for a container to have information about itself, without being overly coupled to Kubernetes. The downward
API allows containers to consume information about themselves or the cluster without using the Kubernetes client or API server.

An example is an existing application that assumes a particular well-known environment variable holds a unique identifier. One
possibility is to wrap the application, but that is tedious and error-prone, and it violates the goal of low coupling. A better option
would be to use the Pod's name as an identifier, and inject the Pod's name into the well-known environment variable.

In Kubernetes, there are two ways to expose Pod and container fields to a running container:

as environment variables
as files in a downwardAPI volume

Together, these two ways of exposing Pod and container fields are called the downward API.

Available fields
Only some Kubernetes API fields are available through the downward API. This section lists which fields you can make available.

You can pass information from available Pod-level fields using fieldRef . At the API level, the spec for a Pod always defines at least
one Container. You can pass information from available Container-level fields using resourceFieldRef .
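As a brief sketch of both mechanisms (the Pod name, container name, and image are placeholders), the following Pod exposes its own name through fieldRef and its CPU limit through resourceFieldRef, both as environment variables:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example               # placeholder name
spec:
  containers:
  - name: app                              # placeholder container name
    image: registry.example/app:latest     # placeholder image
    resources:
      limits:
        cpu: "250m"
        memory: "64Mi"
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name         # Pod-level field
    - name: MY_CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: app
          resource: limits.cpu             # container-level resource field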

Information available via fieldRef


For some Pod-level fields, you can provide them to a container either as an environment variable or using a downwardAPI volume.
The fields available via either mechanism are:

metadata.name

the pod's name

metadata.namespace

the pod's namespace

metadata.uid

the pod's unique ID

metadata.annotations['<KEY>']

the value of the pod's annotation named <KEY> (for example, metadata.annotations['myannotation'])

metadata.labels['<KEY>']

the text value of the pod's label named <KEY> (for example, metadata.labels['mylabel'])

The following information is available through environment variables but not as a downwardAPI volume fieldRef:

spec.serviceAccountName

the name of the pod's service account

spec.nodeName

the name of the node where the Pod is executing

status.hostIP

the primary IP address of the node to which the Pod is assigned

status.hostIPs

the IP addresses, as a dual-stack version of status.hostIP; the first one is always the same as status.hostIP

status.podIP

the pod's primary IP address (usually, its IPv4 address)

status.podIPs

the IP addresses, as a dual-stack version of status.podIP; the first one is always the same as status.podIP

The following information is available through a downwardAPI volume fieldRef , but not as environment variables:

metadata.labels

all of the pod's labels, formatted as label-key="escaped-label-value" with one label per line

metadata.annotations

all of the pod's annotations, formatted as annotation-key="escaped-annotation-value" with one annotation per line

Information available via resourceFieldRef


These container-level fields allow you to provide information about requests and limits for resources such as CPU and memory.

resource: limits.cpu

A container's CPU limit

resource: requests.cpu

A container's CPU request

resource: limits.memory

A container's memory limit

resource: requests.memory

A container's memory request

resource: limits.hugepages-*

A container's hugepages limit

resource: requests.hugepages-*

A container's hugepages request

resource: limits.ephemeral-storage

A container's ephemeral-storage limit

resource: requests.ephemeral-storage

A container's ephemeral-storage request

Fallback information for resource limits


If CPU and memory limits are not specified for a container, and you use the downward API to try to expose that information, then
the kubelet defaults to exposing the maximum allocatable value for CPU and memory based on the node allocatable calculation.

What's next
You can read about downwardAPI volumes.

You can try using the downward API to expose container- or Pod-level information:

as environment variables
as files in downwardAPI volume


4.2 - Workload Management


Kubernetes provides several built-in APIs for declarative management of your workloads and the components of those workloads.

Ultimately, your applications run as containers inside Pods; however, managing individual Pods would be a lot of effort. For example,
if a Pod fails, you probably want to run a new Pod to replace it. Kubernetes can do that for you.

You use the Kubernetes API to create a workload object that represents a higher abstraction level than a Pod, and then the
Kubernetes control plane automatically manages Pod objects on your behalf, based on the specification for the workload object you
defined.

The built-in APIs for managing workloads are:

Deployment (and, indirectly, ReplicaSet), the most common way to run an application on your cluster. Deployment is a good fit for
managing a stateless application workload on your cluster, where any Pod in the Deployment is interchangeable and can be
replaced if needed. (Deployments are a replacement for the legacy ReplicationController API).

A StatefulSet lets you manage one or more Pods – all running the same application code – where the Pods rely on having a distinct
identity. This is different from a Deployment where the Pods are expected to be interchangeable. The most common use for a
StatefulSet is to be able to make a link between its Pods and their persistent storage. For example, you can run a StatefulSet that
associates each Pod with a PersistentVolume. If one of the Pods in the StatefulSet fails, Kubernetes makes a replacement Pod that is
connected to the same PersistentVolume.

A DaemonSet defines Pods that provide facilities that are local to a specific node; for example, a driver that lets containers on that
node access a storage system. You use a DaemonSet when the driver, or other node-level service, has to run on the node where it's
useful. Each Pod in a DaemonSet performs a role similar to a system daemon on a classic Unix / POSIX server. A DaemonSet might
be fundamental to the operation of your cluster, such as a plugin to let that node access cluster networking, it might help you to
manage the node, or it could provide less essential facilities that enhance the container platform you are running. You can run
DaemonSets (and their pods) across every node in your cluster, or across just a subset (for example, only install the GPU accelerator
driver on nodes that have a GPU installed).

You can use a Job and / or a CronJob to define tasks that run to completion and then stop. A Job represents a one-off task, whereas
each CronJob repeats according to a schedule.

Other topics in this section:


4.2.1 - Deployments
A Deployment manages a set of Pods to run an application workload, usually one that doesn't maintain state.

A Deployment provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a
controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their
resources with new Deployments.

Note:
Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use
case is not covered below.

Use Case
The following are typical use cases for Deployments:

Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to
see if it succeeds or not.
Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the
Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet
updates the revision of the Deployment.
Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the
revision of the Deployment.
Scale up the Deployment to facilitate more load.
Pause the rollout of a Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
Use the status of the Deployment as an indicator that a rollout has stuck.
Clean up older ReplicaSets that you don't need anymore.

Creating a Deployment
The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:

controllers/nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80


In this example:

A Deployment named nginx-deployment is created, indicated by the .metadata.name field. This name will become the basis for
the ReplicaSets and Pods which are created later. See Writing a Deployment Spec for more details.

The Deployment creates a ReplicaSet that creates three replicated Pods, indicated by the .spec.replicas field.

The .spec.selector field defines how the created ReplicaSet finds which Pods to manage. In this case, you select a label that is
defined in the Pod template ( app: nginx ). However, more sophisticated selection rules are possible, as long as the Pod
template itself satisfies the rule.

Note:
The .spec.selector.matchLabels field is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value".
All of the requirements, from both matchLabels and matchExpressions, must be satisfied in order to match.

The template field contains the following sub-fields:

The Pods are labeled app: nginx using the .metadata.labels field.
The Pod template's specification, or .template.spec field, indicates that the Pods run one container, nginx , which runs
the nginx Docker Hub image at version 1.14.2.
Create one container and name it nginx using the .spec.template.spec.containers[0].name field.

Before you begin, make sure your Kubernetes cluster is up and running. Follow the steps given below to create the above
Deployment:

1. Create the Deployment by running the following command:

kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

2. Run kubectl get deployments to check if the Deployment was created.

If the Deployment is still being created, the output is similar to the following:

NAME READY UP-TO-DATE AVAILABLE AGE


nginx-deployment 0/3 0 0 1s

When you inspect the Deployments in your cluster, the following fields are displayed:

NAME lists the names of the Deployments in the namespace.

READY displays how many replicas of the application are available to your users. It follows the pattern ready/desired.

UP-TO-DATE displays the number of replicas that have been updated to achieve the desired state.

AVAILABLE displays how many replicas of the application are available to your users.

AGE displays the amount of time that the application has been running.

Notice how the number of desired replicas is 3, according to the .spec.replicas field.

3. To see the Deployment rollout status, run kubectl rollout status deployment/nginx-deployment .

The output is similar to:

Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out

4. Run the kubectl get deployments again a few seconds later. The output is similar to this:

NAME READY UP-TO-DATE AVAILABLE AGE


nginx-deployment 3/3 3 3 18s


Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template)
and available.
5. To see the ReplicaSet ( rs ) created by the Deployment, run kubectl get rs . The output is similar to this:

NAME DESIRED CURRENT READY AGE


nginx-deployment-75675f5897 3 3 3 18s

ReplicaSet output shows the following fields:

NAME lists the names of the ReplicaSets in the namespace.

DESIRED displays the desired number of replicas of the application, which you define when you create the Deployment.
This is the desired state.
CURRENT displays how many replicas are currently running.

READY displays how many replicas of the application are available to your users.

AGE displays the amount of time that the application has been running.

Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-[HASH] . This name will become the basis for
the Pods which are created.

The HASH string is the same as the pod-template-hash label on the ReplicaSet.

6. To see the labels automatically generated for each Pod, run kubectl get pods --show-labels . The output is similar to:

NAME READY STATUS RESTARTS AGE LABELS


nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897

The created ReplicaSet ensures that there are three nginx Pods.

Note:
You must specify an appropriate selector and Pod template labels in a Deployment (in this case, app: nginx ).

Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn't
stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave
unexpectedly.

Pod-template-hash label

Caution:
Do not change this label.

The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.

This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the
ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any
existing Pods that the ReplicaSet might have.

Updating a Deployment
Note:
A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed, for example
if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a
rollout.

Follow the steps given below to update your Deployment:


1. Let's update the nginx Pods to use the nginx:1.16.1 image instead of the nginx:1.14.2 image.

kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1

or use the following command:

kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

where deployment/nginx-deployment indicates the Deployment, nginx indicates the container in which the update will take place, and nginx:1.16.1 indicates the new image and its tag.

The output is similar to:

deployment.apps/nginx-deployment image updated

Alternatively, you can edit the Deployment and change .spec.template.spec.containers[0].image from nginx:1.14.2 to
nginx:1.16.1 :

kubectl edit deployment/nginx-deployment

The output is similar to:

deployment.apps/nginx-deployment edited

2. To see the rollout status, run:

kubectl rollout status deployment/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 2 out of 3 new replicas have been updated...

or

deployment "nginx-deployment" successfully rolled out

Get more details on your updated Deployment:

After the rollout succeeds, you can view the Deployment by running kubectl get deployments . The output is similar to this:

NAME READY UP-TO-DATE AVAILABLE AGE


nginx-deployment 3/3 3 3 36s

Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas,
as well as scaling down the old ReplicaSet to 0 replicas.

kubectl get rs

The output is similar to this:



NAME DESIRED CURRENT READY AGE


nginx-deployment-1564180365 3 3 3 6s
nginx-deployment-2035384211 0 0 0 36s

Running get pods should now show only the new Pods:

kubectl get pods

The output is similar to this:

NAME READY STATUS RESTARTS AGE


nginx-deployment-1564180365-khku8 1/1 Running 0 14s
nginx-deployment-1564180365-nacti 1/1 Running 0 14s
nginx-deployment-1564180365-z9gth 1/1 Running 0 14s

Next time you want to update these Pods, you only need to update the Deployment's Pod template again.

Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at
least 75% of the desired number of Pods are up (25% max unavailable).

Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it
ensures that at most 125% of the desired number of Pods are up (25% max surge).

For example, if you look at the above Deployment closely, you will see that it first creates a new Pod, then deletes an old Pod,
and creates another new one. It does not kill old Pods until a sufficient number of new Pods have come up, and does not
create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 3 Pods are available and that
at max 4 Pods in total are available. In case of a Deployment with 4 replicas, the number of Pods would be between 3 and 5.
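These defaults correspond to the RollingUpdate strategy settings. If you wanted to state them explicitly in the Deployment manifest, the relevant stanza would look like this (25% is the default for both fields):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at least 75% of the desired Pods stay up
      maxSurge: 25%         # at most 125% of the desired Pods exist during the update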

Get details of your Deployment:

kubectl describe deployments

The output is similar to this:


Name: nginx-deployment
Namespace: default
CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision=2
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.16.1
Port: 80/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3
Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1
Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2
Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2
Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1
Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3
Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0

Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it
up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet (nginx-deployment-1564180365) and
scaled it up to 1 and waited for it to come up. Then it scaled down the old ReplicaSet to 2 and scaled up the new ReplicaSet to 2
so that at least 3 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down the
new and the old ReplicaSet, with the same rolling update strategy. Finally, you'll have 3 available replicas in the new ReplicaSet,
and the old ReplicaSet is scaled down to 0.

Note:
Kubernetes doesn't count terminating Pods when calculating the number of availableReplicas, which must be between replicas
- maxUnavailable and replicas + maxSurge. As a result, you might notice that there are more Pods than expected during a rollout,
and that the total resources consumed by the Deployment are more than replicas + maxSurge until the
terminationGracePeriodSeconds of the terminating Pods expires.

Rollover (aka multiple updates in-flight)


Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up the desired Pods. If the
Deployment is updated, the existing ReplicaSets that control Pods whose labels match .spec.selector but whose template does not match .spec.template are scaled down. Eventually, the new ReplicaSet is scaled to .spec.replicas and all old ReplicaSets are scaled
to 0.

If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet as per the update and
starts scaling that up, and rolls over the ReplicaSet that it was scaling up previously -- it will add it to its list of old ReplicaSets and start
scaling it down.

For example, suppose you create a Deployment to create 5 replicas of nginx:1.14.2 , but then update the Deployment to create 5
replicas of nginx:1.16.1 , when only 3 replicas of nginx:1.14.2 had been created. In that case, the Deployment immediately starts
killing the 3 nginx:1.14.2 Pods that it had created, and starts creating nginx:1.16.1 Pods. It does not wait for the 5 replicas of
nginx:1.14.2 to be created before changing course.


Label selector updates


It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front. In any case, if you need
to perform a label selector update, exercise great caution and make sure you have grasped all of the implications.

Note:
In API version apps/v1, a Deployment's label selector is immutable after it gets created.

Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too, otherwise a
validation error is returned. This change is a non-overlapping one, meaning that the new selector does not select ReplicaSets
and Pods created with the old selector, resulting in orphaning all old ReplicaSets and creating a new ReplicaSet.
Selector updates -- that is, changing the existing value in a selector key -- result in the same behavior as additions.
Selector removals -- that is, removing an existing key from the Deployment selector -- do not require any changes in the Pod template
labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the removed label still exists in
any existing Pods and ReplicaSets.

Rolling Back a Deployment


Sometimes, you may want to rollback a Deployment; for example, when the Deployment is not stable, such as crash looping. By
default, all of the Deployment's rollout history is kept in the system so that you can rollback anytime you want (you can change that
by modifying revision history limit).
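For example, to keep only the 10 most recent old ReplicaSets for rollback (10 is also the default), you could set the field explicitly in the Deployment spec:

spec:
  revisionHistoryLimit: 10   # number of old ReplicaSets retained for rollbacks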

Note:
A Deployment's revision is created when a Deployment's rollout is triggered. This means that the new revision is created if and
only if the Deployment's Pod template (.spec.template) is changed, for example if you update the labels or container images of
the template. Other updates, such as scaling the Deployment, do not create a Deployment revision, so that you can facilitate
simultaneous manual- or auto-scaling. This means that when you roll back to an earlier revision, only the Deployment's Pod
template part is rolled back.

Suppose that you made a typo while updating the Deployment, by putting the image name as nginx:1.161 instead of
nginx:1.16.1 :

kubectl set image deployment/nginx-deployment nginx=nginx:1.161

The output is similar to this:

deployment.apps/nginx-deployment image updated

The rollout gets stuck. You can verify it by checking the rollout status:

kubectl rollout status deployment/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 1 out of 3 new replicas have been updated...

Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.

You see that the number of old replicas (adding the replica count from nginx-deployment-1564180365 and nginx-deployment-
2035384211 ) is 3, and the number of new replicas (from nginx-deployment-3066724191 ) is 1.


kubectl get rs

The output is similar to this:

NAME DESIRED CURRENT READY AGE


nginx-deployment-1564180365 3 3 3 25s
nginx-deployment-2035384211 0 0 0 36s
nginx-deployment-3066724191 1 1 0 6s

Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.

kubectl get pods

The output is similar to this:

NAME READY STATUS RESTARTS AGE


nginx-deployment-1564180365-70iae 1/1 Running 0 25s
nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s
nginx-deployment-1564180365-hysrc 1/1 Running 0 25s
nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s

Note:
The Deployment controller stops the bad rollout automatically, and stops scaling up the new ReplicaSet. This depends on
the rollingUpdate parameters (maxUnavailable specifically) that you have specified. Kubernetes by default sets the value to
25%.

Get the description of the Deployment:

kubectl describe deployment

The output is similar to this:


Name: nginx-deployment
Namespace: default
CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
Labels: app=nginx
Selector: app=nginx
Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.161
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True ReplicaSetUpdated
OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)
NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down repli
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down repli
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down repli
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica

To fix this, you need to rollback to a previous revision of Deployment that is stable.

Checking Rollout History of a Deployment


Follow the steps given below to check the rollout history:

1. First, check the revisions of this Deployment:

kubectl rollout history deployment/nginx-deployment

The output is similar to this:

deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161

CHANGE-CAUSEis copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can
specify the CHANGE-CAUSE message by:

Annotating the Deployment with kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"

Manually editing the manifest of the resource.


2. To see the details of each revision, run:

kubectl rollout history deployment/nginx-deployment --revision=2

The output is similar to this:

deployments "nginx-deployment" revision 2


Labels: app=nginx
pod-template-hash=1159050644
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
Containers:
nginx:
Image: nginx:1.16.1
Port: 80/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
Environment Variables: <none>
No volumes.

Rolling Back to a Previous Revision


Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.

1. Now you've decided to undo the current rollout and rollback to the previous revision:

kubectl rollout undo deployment/nginx-deployment

The output is similar to this:

deployment.apps/nginx-deployment rolled back

Alternatively, you can rollback to a specific revision by specifying it with --to-revision :

kubectl rollout undo deployment/nginx-deployment --to-revision=2

The output is similar to this:

deployment.apps/nginx-deployment rolled back

For more details about rollout related commands, read kubectl rollout .

The Deployment is now rolled back to a previous stable revision. As you can see, a DeploymentRollback event for rolling back to
revision 2 is generated from the Deployment controller.

2. To check if the rollback was successful and the Deployment is running as expected, run:

kubectl get deployment nginx-deployment

The output is similar to this:

NAME READY UP-TO-DATE AVAILABLE AGE


nginx-deployment 3/3 3 3 30m

3. Get the description of the Deployment:


kubectl describe deployment nginx-deployment

The output is similar to this:

Name: nginx-deployment
Namespace: default
CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision=4
kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.16.1
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1
Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2
Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3
Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1
Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2
Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0

Scaling a Deployment
You can scale a Deployment by using the following command:

kubectl scale deployment/nginx-deployment --replicas=10

The output is similar to this:

deployment.apps/nginx-deployment scaled

Assuming horizontal Pod autoscaling is enabled in your cluster, you can set up an autoscaler for your Deployment and choose the
minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods.


kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80

The output is similar to this:

deployment.apps/nginx-deployment scaled

Proportional scaling
RollingUpdate Deployments support running multiple versions of an application at the same time. When you or an autoscaler scales
a RollingUpdate Deployment that is in the middle of a rollout (either in progress or paused), the Deployment controller balances the
additional replicas in the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called proportional scaling.

For example, you are running a Deployment with 10 replicas, maxSurge=3, and maxUnavailable=2.
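Expressed in the Deployment manifest, that configuration corresponds to the following stanza (absolute values are allowed as well as percentages):

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3          # up to 13 Pods may exist during an update
      maxUnavailable: 2    # at least 8 Pods must remain available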

Ensure that the 10 replicas in your Deployment are running.

kubectl get deploy

The output is similar to this:

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE


nginx-deployment 10 10 10 10 50s

You update to a new image which happens to be unresolvable from inside the cluster.

kubectl set image deployment/nginx-deployment nginx=nginx:sometag

The output is similar to this:

deployment.apps/nginx-deployment image updated

The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it's blocked due to the
maxUnavailable requirement that you mentioned above. Check out the rollout status:

kubectl get rs

The output is similar to this:

NAME DESIRED CURRENT READY AGE


nginx-deployment-1989198191 5 5 0 9s
nginx-deployment-618515232 8 8 8 1m

Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The
Deployment controller needs to decide where to add these new 5 replicas. If you weren't using proportional scaling, all 5 of
them would be added in the new ReplicaSet. With proportional scaling, you spread the additional replicas across all
ReplicaSets. Bigger proportions go to the ReplicaSets with the most replicas and lower proportions go to ReplicaSets with fewer
replicas. Any leftovers are added to the ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.

In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the new ReplicaSet. The rollout process
should eventually move all replicas to the new ReplicaSet, assuming the new replicas become healthy. To confirm this, run:

https://kubernetes.io/docs/concepts/_print/ 134/609
7/10/24, 9:28 AM Concepts | Kubernetes

kubectl get deploy

The output is similar to this:

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE


nginx-deployment 15 18 7 8 7m

The rollout status confirms how the replicas were added to each ReplicaSet.

kubectl get rs

The output is similar to this:

NAME DESIRED CURRENT READY AGE


nginx-deployment-1989198191 7 7 0 7m
nginx-deployment-618515232 11 11 11 7m

Pausing and Resuming a rollout of a Deployment


When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates.
When you're ready to apply those changes, you resume rollouts for the Deployment. This approach allows you to apply multiple
fixes in between pausing and resuming without triggering unnecessary rollouts.

For example, with a Deployment that was created:

Get the Deployment details:

kubectl get deploy

The output is similar to this:

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE


nginx 3 3 3 3 1m

Get the rollout status:

kubectl get rs

The output is similar to this:

NAME DESIRED CURRENT READY AGE


nginx-2142116321 3 3 3 1m

Pause by running the following command:

kubectl rollout pause deployment/nginx-deployment

The output is similar to this:

https://kubernetes.io/docs/concepts/_print/ 135/609
7/10/24, 9:28 AM Concepts | Kubernetes

deployment.apps/nginx-deployment paused

Then update the image of the Deployment:

kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

The output is similar to this:

deployment.apps/nginx-deployment image updated

Notice that no new rollout started:

kubectl rollout history deployment/nginx-deployment

The output is similar to this:

deployments "nginx"
REVISION CHANGE-CAUSE
1 <none>

Get the rollout status to verify that the existing ReplicaSet has not changed:

kubectl get rs

The output is similar to this:

NAME DESIRED CURRENT READY AGE


nginx-2142116321 3 3 3 2m

You can make as many updates as you wish, for example, update the resources that will be used:

kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi

The output is similar to this:

deployment.apps/nginx-deployment resource requirements updated

The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will
not have any effect as long as the Deployment rollout is paused.

Eventually, resume the Deployment rollout and observe a new ReplicaSet coming up with all the new updates:

kubectl rollout resume deployment/nginx-deployment

The output is similar to this:

deployment.apps/nginx-deployment resumed

Watch the status of the rollout until it's done.

https://kubernetes.io/docs/concepts/_print/ 136/609
7/10/24, 9:28 AM Concepts | Kubernetes

kubectl get rs -w

The output is similar to this:

NAME DESIRED CURRENT READY AGE


nginx-2142116321 2 2 2 2m
nginx-3926361531 2 2 0 6s
nginx-3926361531 2 2 1 18s
nginx-2142116321 1 2 2 2m
nginx-2142116321 1 2 2 2m
nginx-3926361531 3 2 1 18s
nginx-3926361531 3 2 1 18s
nginx-2142116321 1 1 1 2m
nginx-3926361531 3 3 1 18s
nginx-3926361531 3 3 2 19s
nginx-2142116321 0 1 1 2m
nginx-2142116321 0 1 1 2m
nginx-2142116321 0 0 0 2m
nginx-3926361531 3 3 3 20s

Get the status of the latest rollout:

kubectl get rs

The output is similar to this:

NAME DESIRED CURRENT READY AGE


nginx-2142116321 0 0 0 2m
nginx-3926361531 3 3 3 28s

Note:
You cannot roll back a paused Deployment until you resume it.

Deployment status
A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete,
or it can fail to progress.

Progressing Deployment
Kubernetes marks a Deployment as progressing when one of the following tasks is performed:

The Deployment creates a new ReplicaSet.


The Deployment is scaling up its newest ReplicaSet.
The Deployment is scaling down its older ReplicaSet(s).
New Pods become ready or available (ready for at least MinReadySeconds).

When the rollout becomes “progressing”, the Deployment controller adds a condition with the following attributes to the
Deployment's .status.conditions :

type: Progressing

status: "True"

reason: NewReplicaSetCreated | reason: FoundNewReplicaSet | reason: ReplicaSetUpdated

You can monitor the progress for a Deployment by using kubectl rollout status .
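If you want to inspect the Progressing condition directly, one option (a sketch, reusing the nginx-deployment example from this page and kubectl's jsonpath filter expressions) is:

kubectl get deployment nginx-deployment -o jsonpath='{.status.conditions[?(@.type=="Progressing")].reason}'

This prints only the reason of the Progressing condition, for example ReplicaSetUpdated or NewReplicaSetAvailable.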

https://kubernetes.io/docs/concepts/_print/ 137/609
7/10/24, 9:28 AM Concepts | Kubernetes

Complete Deployment
Kubernetes marks a Deployment as complete when it has the following characteristics:

All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning any
updates you've requested have been completed.
All of the replicas associated with the Deployment are available.
No old replicas for the Deployment are running.

When the rollout becomes “complete”, the Deployment controller sets a condition with the following attributes to the Deployment's
.status.conditions :

type: Progressing

status: "True"

reason: NewReplicaSetAvailable

This Progressing condition will retain a status value of "True" until a new rollout is initiated. The condition holds even when
availability of replicas changes (which does instead affect the Available condition).

You can check if a Deployment has completed by using kubectl rollout status . If the rollout completed successfully, kubectl
rollout status returns a zero exit code.

kubectl rollout status deployment/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 2 of 3 updated replicas are available...


deployment "nginx-deployment" successfully rolled out

and the exit status from kubectl rollout is 0 (success):

echo $?

Failed Deployment
Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur due to some of the
following factors:

Insufficient quota
Readiness probe failures
Image pull errors
Insufficient permissions
Limit ranges
Application runtime misconfiguration

One way you can detect this condition is to specify a deadline parameter in your Deployment spec: ( .spec.progressDeadlineSeconds ).
.spec.progressDeadlineSeconds denotes the number of seconds the Deployment controller waits before indicating (in the
Deployment status) that the Deployment progress has stalled.

The following kubectl command sets the spec with progressDeadlineSeconds to make the controller report lack of progress of a
rollout for a Deployment after 10 minutes:

kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'

https://kubernetes.io/docs/concepts/_print/ 138/609
7/10/24, 9:28 AM Concepts | Kubernetes

The output is similar to this:

deployment.apps/nginx-deployment patched

Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to the
Deployment's .status.conditions :

type: Progressing

status: "False"

reason: ProgressDeadlineExceeded

This condition can also fail early, in which case its status is set to "False" due to reasons such as ReplicaSetCreateError . Also, the
deadline is no longer taken into account once the Deployment rollout completes.

See the Kubernetes API conventions for more information on status conditions.

Note:
Kubernetes takes no action on a stalled Deployment other than to report a status condition with reason:
ProgressDeadlineExceeded. Higher level orchestrators can take advantage of it and act accordingly, for example, roll back the
Deployment to its previous version.

Note:
If you pause a Deployment rollout, Kubernetes does not check progress against your specified deadline. You can safely pause a
Deployment rollout in the middle of a rollout and resume without triggering the condition for exceeding the deadline.

You may experience transient errors with your Deployments, either due to a low timeout that you have set or due to any other kind
of error that can be treated as transient. For example, let's suppose you have insufficient quota. If you describe the Deployment you
will notice the following section:

kubectl describe deployment nginx-deployment

The output is similar to this:

<...>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True ReplicaSetUpdated
ReplicaFailure True FailedCreate
<...>

If you run kubectl get deployment nginx-deployment -o yaml , the Deployment status is similar to this:

https://kubernetes.io/docs/concepts/_print/ 139/609
7/10/24, 9:28 AM Concepts | Kubernetes

status:
availableReplicas: 2
conditions:
- lastTransitionTime: 2016-10-04T12:25:39Z
lastUpdateTime: 2016-10-04T12:25:39Z
message: Replica set "nginx-deployment-4262182780" is progressing.
reason: ReplicaSetUpdated
status: "True"
type: Progressing
- lastTransitionTime: 2016-10-04T12:25:42Z
lastUpdateTime: 2016-10-04T12:25:42Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2016-10-04T12:25:39Z
lastUpdateTime: 2016-10-04T12:25:39Z
message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
object-counts, requested: pods=1, used: pods=3, limited: pods=2'
reason: FailedCreate
status: "True"
type: ReplicaFailure
observedGeneration: 3
replicas: 2
unavailableReplicas: 2

Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing
condition:

Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing False ProgressDeadlineExceeded
ReplicaFailure True FailedCreate

You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be
running, or by increasing quota in your namespace. If you satisfy the quota conditions and the Deployment controller then
completes the Deployment rollout, you'll see the Deployment's status update with a successful condition ( status: "True" and
reason: NewReplicaSetAvailable ).

Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable

type: Available with status: "True" means that your Deployment has minimum availability. Minimum availability is dictated by the
parameters specified in the deployment strategy. type: Progressing with status: "True" means that your Deployment is either in
the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum required new
replicas are available (see the Reason of the condition for the particulars - in our case reason: NewReplicaSetAvailable means that
the Deployment is complete).

You can check if a Deployment has failed to progress by using kubectl rollout status . kubectl rollout status returns a non-zero
exit code if the Deployment has exceeded the progression deadline.

kubectl rollout status deployment/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline

https://kubernetes.io/docs/concepts/_print/ 140/609
7/10/24, 9:28 AM Concepts | Kubernetes

and the exit status from kubectl rollout is 1 (indicating an error):

echo $?

Operating on a failed deployment


All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back to a previous
revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template.

Clean up Policy
You can set .spec.revisionHistoryLimit field in a Deployment to specify how many old ReplicaSets for this Deployment you want to
retain. The rest will be garbage-collected in the background. By default, it is 10.

Note:
Explicitly setting this field to 0 will result in cleaning up all the history of your Deployment, so that Deployment will not be able
to roll back.
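If you want to keep fewer revisions, one way to change the limit on an existing Deployment is a patch in the same style as the progressDeadlineSeconds example above (a sketch; the value 3 is illustrative):

kubectl patch deployment/nginx-deployment -p '{"spec":{"revisionHistoryLimit":3}}'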

Canary Deployment
If you want to roll out releases to a subset of users or servers using the Deployment, you can create multiple Deployments, one for
each release, following the canary pattern described in managing resources.

Writing a Deployment Spec


As with all other Kubernetes configs, a Deployment needs .apiVersion , .kind , and .metadata fields. For general information about
working with config files, see deploying applications, configuring containers, and using kubectl to manage resources documents.

When the control plane creates new Pods for a Deployment, the .metadata.name of the Deployment is part of the basis for naming
those Pods. The name of a Deployment must be a valid DNS subdomain value, but this can produce unexpected results for the Pod
hostnames. For best compatibility, the name should follow the more restrictive rules for a DNS label.

A Deployment also needs a .spec section.

Pod Template
The .spec.template and .spec.selector are the only required fields of the .spec .

The .spec.template is a Pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion
or kind .

In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate labels and an appropriate restart
policy. For labels, make sure not to overlap with other controllers. See selector.

Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.

Replicas
.spec.replicas is an optional field that specifies the number of desired Pods. It defaults to 1.

Should you manually scale a Deployment, for example via kubectl scale deployment deployment --replicas=X , and then update that
Deployment based on a manifest (for example, by running kubectl apply -f deployment.yaml ), then applying that manifest
overwrites the manual scaling that you previously did.

https://kubernetes.io/docs/concepts/_print/ 141/609
7/10/24, 9:28 AM Concepts | Kubernetes

If a HorizontalPodAutoscaler (or any similar API for horizontal scaling) is managing scaling for a Deployment, don't set
.spec.replicas .

Instead, allow the Kubernetes control plane to manage the .spec.replicas field automatically.

Selector
.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.

.spec.selector must match .spec.template.metadata.labels , or it will be rejected by the API.

In API version apps/v1 , .spec.selector and .metadata.labels do not default to .spec.template.metadata.labels if not set. So they
must be set explicitly. Also note that .spec.selector is immutable after creation of the Deployment in apps/v1 .

A Deployment may terminate Pods whose labels match the selector if their template is different from .spec.template or if the total
number of such Pods exceeds .spec.replicas . It brings up new Pods with .spec.template if the number of Pods is less than the
desired number.

Note:
You should not create other Pods whose labels match this selector, either directly, by creating another Deployment, or by
creating another controller such as a ReplicaSet or a ReplicationController. If you do so, the first Deployment thinks that it
created these other Pods. Kubernetes does not stop you from doing this.

If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won't behave correctly.

Strategy
.spec.strategy specifies the strategy used to replace old Pods by new ones. .spec.strategy.type can be "Recreate" or
"RollingUpdate". "RollingUpdate" is the default value.

Recreate Deployment
All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate .

Note:
This will only guarantee Pod termination previous to creation for upgrades. If you upgrade a Deployment, all Pods of the old
revision will be terminated immediately. Successful removal is awaited before any Pod of the new revision is created. If you
manually delete a Pod, the lifecycle is controlled by the ReplicaSet and the replacement will be created immediately (even if the
old Pod is still in a Terminating state). If you need an "at most" guarantee for your Pods, you should consider using a StatefulSet.
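Here is a minimal sketch of a Recreate Deployment; it reuses the nginx example used throughout this page, and only the strategy differs:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    type: Recreate   # every existing Pod is terminated before any new Pod is created
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80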

Rolling Update Deployment


The Deployment updates Pods in a rolling update fashion when .spec.strategy.type==RollingUpdate . You can specify maxUnavailable
and maxSurge to control the rolling update process.

Max Unavailable
.spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the maximum number of Pods that can be unavailable
during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example,
10%). The absolute number is calculated from percentage by rounding down. The value cannot be 0 if
.spec.strategy.rollingUpdate.maxSurge is 0. The default value is 25%.

For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired Pods immediately when the
rolling update starts. Once new Pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new
ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired Pods.

Max Surge
.spec.strategy.rollingUpdate.maxSurge is an optional field that specifies the maximum number of Pods that can be created over the
desired number of Pods. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%).
The value cannot be 0 if MaxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. The default
https://kubernetes.io/docs/concepts/_print/ 142/609
7/10/24, 9:28 AM Concepts | Kubernetes

value is 25%.

For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such
that the total number of old and new Pods does not exceed 130% of desired Pods. Once old Pods have been killed, the new
ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130%
of desired Pods.

Here are some Rolling Update Deployment examples that use the maxUnavailable and maxSurge :

Max Unavailable example (a hybrid maxUnavailable/maxSurge sketch follows the manifest below):

apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
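For comparison, here is a sketch of a hybrid variant that sets both fields at once (the values are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod below the desired count during the update
      maxSurge: 1         # at most one Pod above the desired count during the update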

Progress Deadline Seconds


.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to
progress before the system reports back that the Deployment has failed progressing - surfaced as a condition with type:
Progressing , status: "False" , and reason: ProgressDeadlineExceeded in the status of the resource. The Deployment controller will
keep retrying the Deployment. This defaults to 600. In the future, once automatic rollback is implemented, the Deployment
controller will roll back a Deployment as soon as it observes such a condition.

If specified, this field needs to be greater than .spec.minReadySeconds .

Min Ready Seconds


.spec.minReadySeconds is an optional field that specifies the minimum number of seconds for which a newly created Pod should be
ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available
as soon as it is ready). To learn more about when a Pod is considered ready, see Container Probes.
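For example, to require ten seconds of readiness before a Pod counts as available, you could patch the field in the same style as the earlier examples (a sketch; the value is illustrative):

kubectl patch deployment/nginx-deployment -p '{"spec":{"minReadySeconds":10}}'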

Revision History Limit


A Deployment's revision history is stored in the ReplicaSets it controls.

.spec.revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to retain to allow rollback. These old
ReplicaSets consume resources in etcd and crowd the output of kubectl get rs . The configuration of each Deployment revision is
stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of Deployment.
By default, 10 old ReplicaSets will be kept; however, the ideal value depends on the frequency and stability of new Deployments.

https://kubernetes.io/docs/concepts/_print/ 143/609
7/10/24, 9:28 AM Concepts | Kubernetes

More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new
Deployment rollout cannot be undone, since its revision history is cleaned up.

Paused
.spec.paused is an optional boolean field for pausing and resuming a Deployment. The only difference between a paused
Deployment and one that is not paused is that any changes to the PodTemplateSpec of the paused Deployment will not trigger
new rollouts as long as it is paused. A Deployment is not paused by default when it is created.

What's next
Learn more about Pods.
Run a stateless application using a Deployment.
Read the Deployment to understand the Deployment API.
Read about PodDisruptionBudget and how you can use it to manage application availability during disruptions.
Use kubectl to create a Deployment.

https://kubernetes.io/docs/concepts/_print/ 144/609
7/10/24, 9:28 AM Concepts | Kubernetes

4.2.2 - ReplicaSet
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. Usually, you define
a Deployment and let that Deployment manage ReplicaSets automatically.

A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the
availability of a specified number of identical Pods.

How a ReplicaSet works


A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number of replicas
indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods it should create to meet the
number of replicas criteria. A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired
number. When a ReplicaSet needs to create new Pods, it uses its Pod template.

A ReplicaSet is linked to its Pods via the Pods' metadata.ownerReferences field, which specifies what resource the current object is
owned by. All Pods acquired by a ReplicaSet have their owning ReplicaSet's identifying information within their ownerReferences
field. It's through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans accordingly.

A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the
OwnerReference is not a Controller and it matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet.

When to use a ReplicaSet


A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level
concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we
recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't
require updates at all.

This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your
application in the spec section.

Example
controllers/frontend.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# modify replicas according to your case
replicas: 3
selector:
matchLabels:
tier: frontend
template:
metadata:
labels:
tier: frontend
spec:
containers:
- name: php-redis
image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5

https://kubernetes.io/docs/concepts/_print/ 145/609
7/10/24, 9:28 AM Concepts | Kubernetes

Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods
that it manages.

kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml

You can then get the current ReplicaSets deployed:

kubectl get rs

And see the frontend one you created:

NAME DESIRED CURRENT READY AGE


frontend 3 3 3 6s

You can also check on the state of the ReplicaSet:

kubectl describe rs/frontend

And you will see output similar to:

Name: frontend
Namespace: default
Selector: tier=frontend
Labels: app=guestbook
tier=frontend
Annotations: <none>
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: tier=frontend
Containers:
php-redis:
Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx
Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57
Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w

And lastly you can check for the Pods brought up:

kubectl get pods

You should see Pod information similar to:

https://kubernetes.io/docs/concepts/_print/ 146/609
7/10/24, 9:28 AM Concepts | Kubernetes

NAME READY STATUS RESTARTS AGE


frontend-gbgfx 1/1 Running 0 10m
frontend-rwz57 1/1 Running 0 10m
frontend-wkl7w 1/1 Running 0 10m

You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the
Pods running:

kubectl get pods frontend-gbgfx -o yaml

The output will look similar to this, with the frontend ReplicaSet's info set in the metadata's ownerReferences field:

apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2024-02-28T22:30:44Z"
generateName: frontend-
labels:
tier: frontend
name: frontend-gbgfx
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: frontend
uid: e129deca-f864-481b-bb16-b27abfd92292
...

Non-Template Pod acquisitions


While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels
which match the selector of one of your ReplicaSets. The reason for this is that a ReplicaSet is not limited to owning Pods
specified by its template; it can acquire other Pods in the manner specified in the previous sections.

Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:

pods/pod-rs.yaml

https://kubernetes.io/docs/concepts/_print/ 147/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: v1
kind: Pod
metadata:
name: pod1
labels:
tier: frontend
spec:
containers:
- name: hello1
image: gcr.io/google-samples/hello-app:2.0

---

apiVersion: v1
kind: Pod
metadata:
name: pod2
labels:
tier: frontend
spec:
containers:
- name: hello2
image: gcr.io/google-samples/hello-app:1.0

As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet,
they will immediately be acquired by it.

Suppose you create the Pods after the frontend ReplicaSet has been deployed and has set up its initial Pod replicas to fulfill its
replica count requirement:

kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml

The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired
count.

Fetching the Pods:

kubectl get pods

The output shows that the new Pods are either already terminated, or in the process of being terminated:

NAME READY STATUS RESTARTS AGE


frontend-b2zdv 1/1 Running 0 10m
frontend-vcmts 1/1 Running 0 10m
frontend-wtsmm 1/1 Running 0 10m
pod1 0/1 Terminating 0 1s
pod2 0/1 Terminating 0 1s

If you create the Pods first:

kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml

And then create the ReplicaSet however:

https://kubernetes.io/docs/concepts/_print/ 148/609
7/10/24, 9:28 AM Concepts | Kubernetes

kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml

You will see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec, until the total number of
new Pods and original Pods matches its desired count. Fetching the Pods:

kubectl get pods

The output is similar to this:

NAME READY STATUS RESTARTS AGE


frontend-hmmj2 1/1 Running 0 9s
pod1 1/1 Running 0 36s
pod2 1/1 Running 0 36s

In this manner, a ReplicaSet can own a non-homogeneous set of Pods.

Writing a ReplicaSet manifest


As with all other Kubernetes API objects, a ReplicaSet needs the apiVersion , kind , and metadata fields. For ReplicaSets, the kind is
always a ReplicaSet.

When the control plane creates new Pods for a ReplicaSet, the .metadata.name of the ReplicaSet is part of the basis for naming those
Pods. The name of a ReplicaSet must be a valid DNS subdomain value, but this can produce unexpected results for the Pod
hostnames. For best compatibility, the name should follow the more restrictive rules for a DNS label.

A ReplicaSet also needs a .spec section.

Pod Template
The .spec.template is a pod template which is also required to have labels in place. In our frontend.yaml example we had one label:
tier: frontend . Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod.

For the template's restart policy field, .spec.template.spec.restartPolicy , the only allowed value is Always , which is the default.

Pod Selector
The .spec.selector field is a label selector. As discussed earlier these are the labels used to identify potential Pods to acquire. In our
frontend.yaml example, the selector was:

matchLabels:
tier: frontend

In the ReplicaSet, .spec.template.metadata.labels must match spec.selector , or it will be rejected by the API.

Note:
For 2 ReplicaSets specifying the same .spec.selector but different .spec.template.metadata.labels and .spec.template.spec fields,
each ReplicaSet ignores the Pods created by the other ReplicaSet.

Replicas
You can specify how many Pods should run concurrently by setting .spec.replicas . The ReplicaSet will create/delete its Pods to
match this number.

If you do not specify .spec.replicas , then it defaults to 1.

https://kubernetes.io/docs/concepts/_print/ 149/609
7/10/24, 9:28 AM Concepts | Kubernetes

Working with ReplicaSets


Deleting a ReplicaSet and its Pods
To delete a ReplicaSet and all of its Pods, use kubectl delete . The Garbage collector automatically deletes all of the dependent Pods
by default.

When using the REST API or the client-go library, you must set propagationPolicy to Background or Foreground in the -d option.
For example:

kubectl proxy --port=8080


curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"

Deleting just a ReplicaSet


You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the
REST API or the client-go library, you must set propagationPolicy to Orphan . For example:

kubectl proxy --port=8080


curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json"

Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same,
then the new one will adopt the old Pods. However, it will not make any effort to make existing Pods match a new, different pod
template. To update Pods to a new spec in a controlled way, use a Deployment, as ReplicaSets do not support a rolling update
directly.

Isolating Pods from a ReplicaSet


You can remove Pods from a ReplicaSet by changing their labels. This technique may be used to remove Pods from service for
debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of
replicas is not also changed).
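For example, a sketch that relabels the frontend-gbgfx Pod from the earlier example; any value that no longer matches the ReplicaSet's tier=frontend selector works:

# Pod name and new label value are illustrative
kubectl label pod frontend-gbgfx tier=debug --overwrite

The ReplicaSet stops counting that Pod and creates a replacement to get back to the desired number of replicas.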

Scaling a ReplicaSet
A ReplicaSet can be easily scaled up or down by simply updating the .spec.replicas field. The ReplicaSet controller ensures that a
desired number of Pods with a matching label selector are available and operational.
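For example, a sketch using the frontend ReplicaSet from earlier (you can also edit .spec.replicas in the manifest and re-apply it):

kubectl scale rs/frontend --replicas=5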

When scaling down, the ReplicaSet controller chooses which pods to delete by sorting the available pods to prioritize scaling down
pods based on the following general algorithm:

1. Pending (and unschedulable) pods are scaled down first


2. If controller.kubernetes.io/pod-deletion-cost annotation is set, then the pod with the lower value will come first.
3. Pods on nodes with more replicas come before pods on nodes with fewer replicas.
4. If the pods' creation times differ, the pod that was created more recently comes before the older pod (the creation times are
bucketed on an integer log scale when the LogarithmicScaleDown feature gate is enabled)

If all of the above match, then selection is random.

Pod deletion cost

ⓘ FEATURE STATE: Kubernetes v1.22 [beta]

Using the controller.kubernetes.io/pod-deletion-cost annotation, users can set a preference regarding which pods to remove first
when downscaling a ReplicaSet.
https://kubernetes.io/docs/concepts/_print/ 150/609
7/10/24, 9:28 AM Concepts | Kubernetes

The annotation should be set on the pod; the allowed range is [-2147483648, 2147483647]. It represents the cost of deleting a pod compared
to other pods belonging to the same ReplicaSet. Pods with lower deletion cost are preferred to be deleted before pods with higher
deletion cost.

The implicit value for this annotation for pods that don't set it is 0; negative values are permitted. Invalid values will be rejected by
the API server.

This feature is beta and enabled by default. You can disable it using the feature gate PodDeletionCost in both kube-apiserver and
kube-controller-manager.

Note:
This is honored on a best-effort basis, so it does not offer any guarantees on pod deletion order.
Users should avoid updating the annotation frequently, such as updating it based on a metric value, because doing so will
generate a significant number of pod updates on the apiserver.

Example Use Case


The different pods of an application could have different utilization levels. On scale down, the application may prefer to remove the
pods with lower utilization. To avoid frequently updating the pods, the application should update controller.kubernetes.io/pod-deletion-cost
once before issuing a scale down (setting the annotation to a value proportional to the pod utilization level). This works if
the application itself controls the downscaling; for example, the driver pod of a Spark deployment.
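For illustration, an operator (or the application itself, via the API) might set the annotation like this before the scale-down (a sketch; the Pod name and the value are illustrative):

# Pods with lower deletion cost are removed first on scale-down
kubectl annotate pod frontend-gbgfx controller.kubernetes.io/pod-deletion-cost=100 --overwrite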

ReplicaSet as a Horizontal Pod Autoscaler Target


A ReplicaSet can also be a target for Horizontal Pod Autoscalers (HPA). That is, a ReplicaSet can be auto-scaled by an HPA. Here is an
example HPA targeting the ReplicaSet we created in the previous example.

controllers/hpa-rs.yaml

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: frontend-scaler
spec:
scaleTargetRef:
kind: ReplicaSet
name: frontend
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 50

Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the
target ReplicaSet depending on the CPU usage of the replicated Pods.

kubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml

Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)

kubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50

https://kubernetes.io/docs/concepts/_print/ 151/609
7/10/24, 9:28 AM Concepts | Kubernetes

Alternatives to ReplicaSet
Deployment (recommended)
Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates.
While ReplicaSets can be used independently, today they're mainly used by Deployments as a mechanism to orchestrate Pod
creation, deletion and updates. When you use Deployments you don't have to worry about managing the ReplicaSets that they
create. Deployments own and manage their ReplicaSets. As such, it is recommended to use Deployments when you want
ReplicaSets.

Bare Pods
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as
in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a
ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple
Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some
agent on the node such as Kubelet.

Job
Use a Job instead of a ReplicaSet for Pods that are expected to terminate on their own (that is, batch jobs).

DaemonSet
Use a DaemonSet instead of a ReplicaSet for Pods that provide a machine-level function, such as machine monitoring or machine
logging. These Pods have a lifetime that is tied to a machine lifetime: the Pod needs to be running on the machine before other Pods
start, and are safe to terminate when the machine is otherwise ready to be rebooted/shutdown.

ReplicationController
ReplicaSets are the successors to ReplicationControllers. The two serve the same purpose, and behave similarly, except that a
ReplicationController does not support set-based selector requirements as described in the labels user guide. As such, ReplicaSets
are preferred over ReplicationControllers.

What's next
Learn about Pods.
Learn about Deployments.
Run a Stateless Application Using a Deployment, which relies on ReplicaSets to work.
ReplicaSet is a top-level resource in the Kubernetes REST API. Read the ReplicaSet object definition to understand the API for
replica sets.
Read about PodDisruptionBudget and how you can use it to manage application availability during disruptions.

https://kubernetes.io/docs/concepts/_print/ 152/609
7/10/24, 9:28 AM Concepts | Kubernetes

4.2.3 - StatefulSets
A StatefulSet runs a group of Pods, and maintains a sticky identity for each of those Pods. This is useful for
managing applications that need persistent storage or a stable, unique network identity.

StatefulSet is the workload API object used to manage stateful applications.

Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.

Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet
maintains a sticky identity for each of its Pods. These pods are created from the same spec, but are not interchangeable: each has a
persistent identifier that it maintains across any rescheduling.

If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution.
Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing
volumes to the new Pods that replace any that have failed.

Using StatefulSets
StatefulSets are valuable for applications that require one or more of the following.

Stable, unique network identifiers.


Stable, persistent storage.
Ordered, graceful deployment and scaling.
Ordered, automated rolling updates.

In the above, stable is synonymous with persistence across Pod (re)scheduling. If an application doesn't require any stable identifiers
or ordered deployment, deletion, or scaling, you should deploy your application using a workload object that provides a set of
stateless replicas. Deployment or ReplicaSet may be better suited to your stateless needs.

Limitations
The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner (examples here) based on the
requested storage class, or pre-provisioned by an admin.
Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure
data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for
creating this Service.
StatefulSets do not provide any guarantees on the termination of pods when a StatefulSet is deleted. To achieve ordered and
graceful termination of the pods in the StatefulSet, it is possible to scale the StatefulSet down to 0 prior to deletion.
When using Rolling Updates with the default Pod Management Policy ( OrderedReady ), it's possible to get into a broken state
that requires manual intervention to repair.

Components
The example below demonstrates the components of a StatefulSet.

https://kubernetes.io/docs/concepts/_print/ 153/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx # has to match .spec.template.metadata.labels
serviceName: "nginx"
replicas: 3 # by default is 1
minReadySeconds: 10 # by default is 0
template:
metadata:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: registry.k8s.io/nginx-slim:0.24
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "my-storage-class"
resources:
requests:
storage: 1Gi

Note:
This example uses the ReadWriteOnce access mode, for simplicity. For production use, the Kubernetes project recommends using
the ReadWriteOncePod access mode instead.

In the above example:

A Headless Service, named nginx , is used to control the network domain.


The StatefulSet, named web , has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner.

The name of a StatefulSet object must be a valid DNS label.

https://kubernetes.io/docs/concepts/_print/ 154/609
7/10/24, 9:28 AM Concepts | Kubernetes

Pod Selector
You must set the .spec.selector field of a StatefulSet to match the labels of its .spec.template.metadata.labels . Failing to specify a
matching Pod Selector will result in a validation error during StatefulSet creation.

Volume Claim Templates


You can set the .spec.volumeClaimTemplates field to create a PersistentVolumeClaim. This will provide stable storage to the
StatefulSet if either

The StorageClass specified for the volume claim is set up to use dynamic provisioning, or
The cluster already contains a PersistentVolume with the correct StorageClass and sufficient available storage space.

Minimum ready seconds

ⓘ FEATURE STATE: Kubernetes v1.25 [stable]

.spec.minReadySeconds is an optional field that specifies the minimum number of seconds for which a newly created Pod should be
running and ready without any of its containers crashing, for it to be considered available. This is used to check progression of a
rollout when using a Rolling Update strategy. This field defaults to 0 (the Pod will be considered available as soon as it is ready). To
learn more about when a Pod is considered ready, see Container Probes.

Pod Identity
StatefulSet Pods have a unique identity that consists of an ordinal, a stable network identity, and stable storage. The identity sticks to
the Pod, regardless of which node it's (re)scheduled on.

Ordinal Index
For a StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, that is unique over the Set. By
default, pods will be assigned ordinals from 0 up through N-1. The StatefulSet controller will also add a pod label with this index:
apps.kubernetes.io/pod-index .

Start ordinal

ⓘ FEATURE STATE: Kubernetes v1.27 [beta]

.spec.ordinals is an optional field that allows you to configure the integer ordinals assigned to each Pod. It defaults to nil. You must
enable the StatefulSetStartOrdinal feature gate to use this field. Once enabled, you can configure the following options:

.spec.ordinals.start : If the .spec.ordinals.start field is set, Pods will be assigned ordinals from .spec.ordinals.start up
through .spec.ordinals.start + .spec.replicas - 1 .
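As an abbreviated sketch, reusing the web StatefulSet from the example above (the rest of the manifest is unchanged):

apiVersion: apps/v1
kind: StatefulSet
...
spec:
  replicas: 3
  ordinals:
    start: 2   # with 3 replicas, the Pods are named web-2, web-3 and web-4
...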

Stable Network ID
Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. The pattern for the
constructed hostname is $(statefulset name)-$(ordinal) . The example above will create three Pods named web-0,web-1,web-2 . A
StatefulSet can use a Headless Service to control the domain of its Pods. The domain managed by this Service takes the form:
$(service name).$(namespace).svc.cluster.local , where "cluster.local" is the cluster domain. As each Pod is created, it gets a
matching DNS subdomain, taking the form: $(podname).$(governing service domain) , where the governing service is defined by the
serviceName field on the StatefulSet.

Depending on how DNS is configured in your cluster, you may not be able to look up the DNS name for a newly-run Pod
immediately. This behavior can occur when other clients in the cluster have already sent queries for the hostname of the Pod before
it was created. Negative caching (normal in DNS) means that the results of previous failed lookups are remembered and reused,
even after the Pod is running, for at least a few seconds.

If you need to discover Pods promptly after they are created, you have a few options:

https://kubernetes.io/docs/concepts/_print/ 155/609
7/10/24, 9:28 AM Concepts | Kubernetes

Query the Kubernetes API directly (for example, using a watch) rather than relying on DNS lookups.
Decrease the time of caching in your Kubernetes DNS provider (typically this means editing the config map for CoreDNS, which
currently caches for 30 seconds).

As mentioned in the limitations section, you are responsible for creating the Headless Service responsible for the network identity of
the pods.

Here are some examples of choices for Cluster Domain, Service name, StatefulSet name, and how that affects the DNS names for
the StatefulSet's Pods.

Cluster Domain | Service (ns/name) | StatefulSet (ns/name) | StatefulSet Domain | Pod DNS | Pod Hostname
cluster.local | default/nginx | default/web | nginx.default.svc.cluster.local | web-{0..N-1}.nginx.default.svc.cluster.local | web-{0..N-1}
cluster.local | foo/nginx | foo/web | nginx.foo.svc.cluster.local | web-{0..N-1}.nginx.foo.svc.cluster.local | web-{0..N-1}
kube.local | foo/nginx | foo/web | nginx.foo.svc.kube.local | web-{0..N-1}.nginx.foo.svc.kube.local | web-{0..N-1}

Note:
Cluster Domain will be set to cluster.local unless otherwise configured.

Stable Storage
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example
above, each Pod receives a single PersistentVolume with a StorageClass of my-storage-class and 1 GiB of provisioned storage. If no
StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its volumeMounts
mount the PersistentVolumes associated with its PersistentVolume Claims. Note that the PersistentVolumes associated with the
Pods' PersistentVolume Claims are not deleted when the Pods or the StatefulSet are deleted. This must be done manually.

Pod Name Label


When the StatefulSet controller creates a Pod, it adds a label, statefulset.kubernetes.io/pod-name , that is set to the name of the Pod.
This label allows you to attach a Service to a specific Pod in the StatefulSet.
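For example, here is a sketch of a Service that targets only web-0 from the example above (the Service name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-0-direct   # illustrative name
spec:
  selector:
    statefulset.kubernetes.io/pod-name: web-0
  ports:
  - name: web
    port: 80
    targetPort: 80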

Pod index label

ⓘ FEATURE STATE: Kubernetes v1.28 [beta]

When the StatefulSet controller creates a Pod, the new Pod is labelled with apps.kubernetes.io/pod-index . The value of this label is
the ordinal index of the Pod. This label allows you to route traffic to a particular pod index, filter logs/metrics using the pod index
label, and more. Note the feature gate PodIndexLabel must be enabled for this feature, and it is enabled by default.
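For example, you could list only the Pod with ordinal 0 by using a label selector (a sketch based on the web example):

kubectl get pods -l apps.kubernetes.io/pod-index=0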

Deployment and Scaling Guarantees


For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
Before a Pod is terminated, all of its successors must be completely shutdown.

The StatefulSet should not specify a pod.Spec.TerminationGracePeriodSeconds of 0. This practice is unsafe and strongly discouraged.
For further explanation, please refer to force deleting StatefulSet Pods.

https://kubernetes.io/docs/concepts/_print/ 156/609
7/10/24, 9:28 AM Concepts | Kubernetes

When the nginx example above is created, three Pods will be deployed in the order web-0, web-1, web-2. web-1 will not be deployed
before web-0 is Running and Ready, and web-2 will not be deployed until web-1 is Running and Ready. If web-0 should fail, after
web-1 is Running and Ready, but before web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and
becomes Running and Ready.

If a user were to scale the deployed example by patching the StatefulSet such that replicas=1 , web-2 would be terminated first.
web-1 would not be terminated until web-2 is fully shutdown and deleted. If web-0 were to fail after web-2 has been terminated and
is completely shutdown, but prior to web-1's termination, web-1 would not be terminated until web-0 is Running and Ready.

Pod Management Policies


StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its
.spec.podManagementPolicy field.

OrderedReady Pod Management


OrderedReady pod management is the default for StatefulSets. It implements the behavior described above.

Parallel Pod Management


Parallel pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and to not wait for Pods to
become Running and Ready or completely terminated prior to launching or terminating another Pod. This option only affects the
behavior for scaling operations. Updates are not affected.
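As an abbreviated sketch (the rest of the manifest is unchanged from the web example):

apiVersion: apps/v1
kind: StatefulSet
...
spec:
  podManagementPolicy: Parallel   # launch and terminate Pods without waiting on ordinal order
...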

Update strategies
A StatefulSet's .spec.updateStrategy field allows you to configure and disable automated rolling updates for containers, labels,
resource request/limits, and annotations for the Pods in a StatefulSet. There are two possible values:

OnDelete

When a StatefulSet's .spec.updateStrategy.type is set to OnDelete, the StatefulSet controller will not automatically update the Pods
in a StatefulSet. Users must manually delete Pods to cause the controller to create new Pods that reflect modifications made to a
StatefulSet's .spec.template.

RollingUpdate

The RollingUpdate update strategy implements automated, rolling updates for the Pods in a StatefulSet. This is the default update
strategy.

Rolling Updates
When a StatefulSet's .spec.updateStrategy.type is set to RollingUpdate , the StatefulSet controller will delete and recreate each Pod
in the StatefulSet. It will proceed in the same order as Pod termination (from the largest ordinal to the smallest), updating each Pod
one at a time.

The Kubernetes control plane waits until an updated Pod is Running and Ready prior to updating its predecessor. If you have set
.spec.minReadySeconds (see Minimum Ready Seconds), the control plane additionally waits that amount of time after the Pod turns
ready, before moving on.

Partitioned rolling updates


The RollingUpdate update strategy can be partitioned, by specifying a .spec.updateStrategy.rollingUpdate.partition . If a partition is
specified, all Pods with an ordinal that is greater than or equal to the partition will be updated when the StatefulSet's .spec.template
is updated. All Pods with an ordinal that is less than the partition will not be updated, and, even if they are deleted, they will be
recreated at the previous version. If a StatefulSet's .spec.updateStrategy.rollingUpdate.partition is greater than its .spec.replicas ,
updates to its .spec.template will not be propagated to its Pods. In most cases you will not need to use a partition, but they are
useful if you want to stage an update, roll out a canary, or perform a phased roll out.
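As an abbreviated sketch (the rest of the manifest is unchanged from the web example; the partition value is illustrative):

apiVersion: apps/v1
kind: StatefulSet
...
spec:
  replicas: 3
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only Pods with an ordinal >= 2 (web-2 here) get the updated template
...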

https://kubernetes.io/docs/concepts/_print/ 157/609
7/10/24, 9:28 AM Concepts | Kubernetes

Maximum unavailable Pods

ⓘ FEATURE STATE: Kubernetes v1.24 [alpha]

You can control the maximum number of Pods that can be unavailable during an update by specifying the
.spec.updateStrategy.rollingUpdate.maxUnavailable field. The value can be an absolute number (for example, 5 ) or a percentage of
desired Pods (for example, 10% ). Absolute number is calculated from the percentage value by rounding it up. This field cannot be 0.
The default setting is 1.

This field applies to all Pods in the range 0 to replicas - 1 . If there is any unavailable Pod in the range 0 to replicas - 1 , it will be
counted towards maxUnavailable .

Note:
The maxUnavailable field is in Alpha stage and it is honored only by API servers that are running with the
MaxUnavailableStatefulSet feature gate enabled.

Forced rollback
When using Rolling Updates with the default Pod Management Policy ( OrderedReady ), it's possible to get into a broken state that
requires manual intervention to repair.

If you update the Pod template to a configuration that never becomes Running and Ready (for example, due to a bad binary or
application-level configuration error), StatefulSet will stop the rollout and wait.

In this state, it's not enough to revert the Pod template to a good configuration. Due to a known issue, StatefulSet will continue to
wait for the broken Pod to become Ready (which never happens) before it will attempt to revert it back to the working configuration.

After reverting the template, you must also delete any Pods that StatefulSet had already attempted to run with the bad
configuration. StatefulSet will then begin to recreate the Pods using the reverted template.
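A sketch of that recovery sequence (the manifest file name and Pod name are assumptions for illustration):

# Re-apply a manifest whose Pod template is known to be good
kubectl apply -f statefulset-good.yaml

# Delete any Pod still stuck with the bad configuration;
# the controller recreates it from the reverted template
kubectl delete pod web-2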

PersistentVolumeClaim retention
ⓘ FEATURE STATE: Kubernetes v1.27 [beta]

The optional .spec.persistentVolumeClaimRetentionPolicy field controls if and how PVCs are deleted during the lifecycle of a
StatefulSet. You must enable the StatefulSetAutoDeletePVC feature gate on the API server and the controller manager to use this
field. Once enabled, there are two policies you can configure for each StatefulSet:

whenDeleted

configures the volume retention behavior that applies when the StatefulSet is deleted

whenScaled

configures the volume retention behavior that applies when the replica count of the StatefulSet is reduced; for example, when
scaling down the set.

For each policy that you can configure, you can set the value to either Delete or Retain .

Delete

The PVCs created from the StatefulSet volumeClaimTemplate are deleted for each Pod affected by the policy. With the whenDeleted
policy all PVCs from the volumeClaimTemplate are deleted after their Pods have been deleted. With the whenScaled policy, only PVCs
corresponding to Pod replicas being scaled down are deleted, after their Pods have been deleted.

Retain (default)

PVCs from the volumeClaimTemplate are not affected when their Pod is deleted. This is the behavior before this new feature.

Bear in mind that these policies only apply when Pods are being removed due to the StatefulSet being deleted or scaled down. For
example, if a Pod associated with a StatefulSet fails due to node failure, and the control plane creates a replacement Pod, the
StatefulSet retains the existing PVC. The existing volume is unaffected, and the cluster will attach it to the node where the new Pod is
https://kubernetes.io/docs/concepts/_print/ 158/609
7/10/24, 9:28 AM Concepts | Kubernetes

about to launch.

The default for policies is Retain , matching the StatefulSet behavior before this new feature.

Here is an example policy.

apiVersion: apps/v1
kind: StatefulSet
...
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Delete
  ...

The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is
terminated. This enables the Pod to cleanly unmount all volumes before the PVCs are deleted (and before the backing PV and
volume are deleted, depending on the retain policy). When you set the whenDeleted policy to Delete , an owner reference to the
StatefulSet instance is placed on all PVCs associated with that StatefulSet.

The whenScaled policy must delete PVCs only when a Pod is scaled down, and not when a Pod is deleted for another reason. When
reconciling, the StatefulSet controller compares its desired replica count to the actual Pods present on the cluster. Any StatefulSet
Pod whose ordinal is greater than or equal to the replica count is condemned and marked for deletion. If the whenScaled policy is Delete , the
condemned Pods are first set as owners to the associated StatefulSet template PVCs, before the Pod is deleted. This causes the PVCs
to be garbage collected after only the condemned Pods have terminated.

This means that if the controller crashes and restarts, no Pod will be deleted before its owner reference has been updated
appropriate to the policy. If a condemned Pod is force-deleted while the controller is down, the owner reference may or may not
have been set up, depending on when the controller crashed. It may take several reconcile loops to update the owner references, so
some condemned Pods may have set up owner references and others may not. For this reason we recommend waiting for the
controller to come back up, which will verify owner references before terminating Pods. If that is not possible, the operator should
verify the owner references on PVCs to ensure the expected objects are deleted when Pods are force-deleted.
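For example (a sketch; the PVC name www-web-2 assumes a volumeClaimTemplate named www on a StatefulSet named web), you can inspect the owner references with:

kubectl get pvc www-web-2 -o jsonpath='{.metadata.ownerReferences}'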

Replicas
.spec.replicas is an optional field that specifies the number of desired Pods. It defaults to 1.

If you manually scale a StatefulSet, for example via kubectl scale statefulset statefulset --replicas=X , and then you update
that StatefulSet based on a manifest (for example: by running kubectl apply -f statefulset.yaml ), then applying that manifest
overwrites the manual scaling that you previously did.

If a HorizontalPodAutoscaler (or any similar API for horizontal scaling) is managing scaling for a Statefulset, don't set .spec.replicas .
Instead, allow the Kubernetes control plane to manage the .spec.replicas field automatically.

What's next
Learn about Pods.
Find out how to use StatefulSets
Follow an example of deploying a stateful application.
Follow an example of deploying Cassandra with Stateful Sets.
Follow an example of running a replicated stateful application.
Learn how to scale a StatefulSet.
Learn what's involved when you delete a StatefulSet.
Learn how to configure a Pod to use a volume for storage.
Learn how to configure a Pod to use a PersistentVolume for storage.
StatefulSet is a top-level resource in the Kubernetes REST API. Read the StatefulSet object definition to understand the API for
stateful sets.
Read about PodDisruptionBudget and how you can use it to manage application availability during disruptions.

https://kubernetes.io/docs/concepts/_print/ 159/609
7/10/24, 9:28 AM Concepts | Kubernetes

4.2.4 - DaemonSet
A DaemonSet defines Pods that provide node-local facilities. These might be fundamental to the operation of
your cluster, such as a networking helper tool, or be part of an add-on.

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As
nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet are:

running a cluster storage daemon on every node


running a logs collection daemon on every node
running a node monitoring daemon on every node

In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex setup might use
multiple DaemonSets for a single type of daemon, but with different flags and/or different memory and cpu requests for different
hardware types.

Writing a DaemonSet Spec


Create a DaemonSet
You can describe a DaemonSet in a YAML file. For example, the daemonset.yaml file below describes a DaemonSet that runs the
fluentd-elasticsearch Docker image:

controllers/daemonset.yaml

https://kubernetes.io/docs/concepts/_print/ 160/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      # it may be desirable to set a high priority class to ensure that a DaemonSet Pod
      # preempts running Pods
      # priorityClassName: important
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

Create a DaemonSet based on the YAML file:

kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml

Required Fields
As with all other Kubernetes config, a DaemonSet needs apiVersion , kind , and metadata fields. For general information about
working with config files, see running stateless applications and object management using kubectl.

The name of a DaemonSet object must be a valid DNS subdomain name.

A DaemonSet also needs a .spec section.

Pod Template
The .spec.template is one of the required fields in .spec .
https://kubernetes.io/docs/concepts/_print/ 161/609
7/10/24, 9:28 AM Concepts | Kubernetes

The .spec.template is a pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion
or kind .

In addition to required fields for a Pod, a Pod template in a DaemonSet has to specify appropriate labels (see pod selector).

A Pod Template in a DaemonSet must have a RestartPolicy equal to Always , or be unspecified, which defaults to Always .

Pod Selector
The .spec.selector field is a pod selector. It works the same as the .spec.selector of a Job.

You must specify a pod selector that matches the labels of the .spec.template . Also, once a DaemonSet is created, its
.spec.selector can not be mutated. Mutating the pod selector can lead to the unintentional orphaning of Pods, and it was found to
be confusing to users.

The .spec.selector is an object consisting of two fields:

matchLabels - works the same as the .spec.selector of a ReplicationController.
matchExpressions - allows building more sophisticated selectors by specifying a key, a list of values, and an operator that relates
the key and values.

When the two are specified the result is ANDed.

The .spec.selector must match the .spec.template.metadata.labels . Config with these two not matching will be rejected by the API.

Running Pods on select Nodes


If you specify a .spec.template.spec.nodeSelector , then the DaemonSet controller will create Pods on nodes which match that node
selector. Likewise, if you specify a .spec.template.spec.affinity , then the DaemonSet controller will create Pods on nodes which match
that node affinity. If you do not specify either, then the DaemonSet controller will create Pods on all nodes.
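For instance, a sketch that restricts the DaemonSet above to nodes carrying a particular label (the label key and value are assumptions):

spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd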

How Daemon Pods are scheduled


A DaemonSet can be used to ensure that all eligible nodes run a copy of a Pod. The DaemonSet controller creates a Pod for each
eligible node and adds the spec.affinity.nodeAffinity field of the Pod to match the target host. After the Pod is created, the default
scheduler typically takes over and then binds the Pod to the target host by setting the .spec.nodeName field. If the new Pod cannot fit
on the node, the default scheduler may preempt (evict) some of the existing Pods based on the priority of the new Pod.

Note:
If it's important that the DaemonSet pod run on each node, it's often desirable to set the .spec.template.spec.priorityClassName
of the DaemonSet to a PriorityClass with a higher priority to ensure that this eviction occurs.

The user can specify a different scheduler for the Pods of the DaemonSet, by setting the .spec.template.spec.schedulerName field of
the DaemonSet.

The original node affinity specified at the .spec.template.spec.affinity.nodeAffinity field (if specified) is taken into consideration by
the DaemonSet controller when evaluating the eligible nodes, but is replaced on the created Pod with the node affinity that matches
the name of the eligible node.

nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name

https://kubernetes.io/docs/concepts/_print/ 162/609
7/10/24, 9:28 AM Concepts | Kubernetes

Taints and tolerations


The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:

Toleration key | Effect | Details
node.kubernetes.io/not-ready | NoExecute | DaemonSet Pods can be scheduled onto nodes that are not healthy or ready to accept Pods. Any DaemonSet Pods running on such nodes will not be evicted.
node.kubernetes.io/unreachable | NoExecute | DaemonSet Pods can be scheduled onto nodes that are unreachable from the node controller. Any DaemonSet Pods running on such nodes will not be evicted.
node.kubernetes.io/disk-pressure | NoSchedule | DaemonSet Pods can be scheduled onto nodes with disk pressure issues.
node.kubernetes.io/memory-pressure | NoSchedule | DaemonSet Pods can be scheduled onto nodes with memory pressure issues.
node.kubernetes.io/pid-pressure | NoSchedule | DaemonSet Pods can be scheduled onto nodes with process pressure issues.
node.kubernetes.io/unschedulable | NoSchedule | DaemonSet Pods can be scheduled onto nodes that are unschedulable.
node.kubernetes.io/network-unavailable | NoSchedule | Only added for DaemonSet Pods that request host networking, i.e., Pods having spec.hostNetwork: true . Such DaemonSet Pods can be scheduled onto nodes with unavailable network.

You can add your own tolerations to the Pods of a DaemonSet as well, by defining these in the Pod template of the DaemonSet.
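As a sketch (the taint key here is a hypothetical custom taint, not a built-in Kubernetes taint):

spec:
  template:
    spec:
      tolerations:
      - key: example.com/dedicated   # hypothetical custom taint key
        operator: Exists
        effect: NoSchedule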

Because the DaemonSet controller sets the node.kubernetes.io/unschedulable:NoSchedule toleration automatically, Kubernetes can
run DaemonSet Pods on nodes that are marked as unschedulable.

If you use a DaemonSet to provide an important node-level function, such as cluster networking, it is helpful that Kubernetes places
DaemonSet Pods on nodes before they are ready. For example, without that special toleration, you could end up in a deadlock
situation where the node is not marked as ready because the network plugin is not running there, and at the same time the network
plugin is not running on that node because the node is not yet ready.

Communicating with Daemon Pods


Some possible patterns for communicating with Pods in a DaemonSet are:

Push: Pods in the DaemonSet are configured to send updates to another service, such as a stats database. They do not have
clients.
NodeIP and Known Port: Pods in the DaemonSet can use a hostPort , so that the pods are reachable via the node IPs. Clients
know the list of node IPs somehow, and know the port by convention.
DNS: Create a headless service with the same pod selector, and then discover DaemonSets using the endpoints resource or
retrieve multiple A records from DNS (see the sketch after this list).
Service: Create a service with the same Pod selector, and use the service to reach a daemon on a random node. (No way to
reach specific node.)
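For the DNS pattern, here is a sketch of a headless Service that selects the fluentd-elasticsearch DaemonSet Pods from the earlier example (the Service name, port name, and port number are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: fluentd-elasticsearch     # hypothetical Service name
  namespace: kube-system
spec:
  clusterIP: None                 # headless: DNS returns the individual Pod IPs
  selector:
    name: fluentd-elasticsearch   # same labels as the DaemonSet Pod template
  ports:
  - name: metrics                 # hypothetical port
    port: 24231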

Updating a DaemonSet
If node labels are changed, the DaemonSet will promptly add Pods to newly matching nodes and delete Pods from newly not-
matching nodes.

You can modify the Pods that a DaemonSet creates. However, Pods do not allow all fields to be updated. Also, the DaemonSet
controller will use the original template the next time a node (even with the same name) is created.
https://kubernetes.io/docs/concepts/_print/ 163/609
7/10/24, 9:28 AM Concepts | Kubernetes

You can delete a DaemonSet. If you specify --cascade=orphan with kubectl , then the Pods will be left on the nodes. If you
subsequently create a new DaemonSet with the same selector, the new DaemonSet adopts the existing Pods. If any Pods need
replacing the DaemonSet replaces them according to its updateStrategy .

You can perform a rolling update on a DaemonSet.
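For example (a sketch reusing the DaemonSet name from the earlier manifest), you can watch and inspect a rollout with:

kubectl rollout status daemonset/fluentd-elasticsearch -n kube-system
kubectl rollout history daemonset/fluentd-elasticsearch -n kube-system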

Alternatives to DaemonSet
Init scripts
It is certainly possible to run daemon processes by directly starting them on a node (e.g. using init , upstartd , or systemd ). This is
perfectly fine. However, there are several advantages to running such processes via a DaemonSet:

Ability to monitor and manage logs for daemons in the same way as applications.
Same config language and tools (e.g. Pod templates, kubectl ) for daemons and applications.
Running daemons in containers with resource limits increases isolation between daemons from app containers. However, this
can also be accomplished by running the daemons in a container but not in a Pod.

Bare Pods
It is possible to create Pods directly which specify a particular node to run on. However, a DaemonSet replaces Pods that are deleted
or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this
reason, you should use a DaemonSet rather than creating individual Pods.

Static Pods
It is possible to create Pods by writing a file to a certain directory watched by Kubelet. These are called static pods. Unlike
DaemonSet, static Pods cannot be managed with kubectl or other Kubernetes API clients. Static Pods do not depend on the
apiserver, making them useful in cluster bootstrapping cases. Also, static Pods may be deprecated in the future.

Deployments
DaemonSets are similar to Deployments in that they both create Pods, and those Pods have processes which are not expected to
terminate (e.g. web servers, storage servers).

Use a Deployment for stateless services, like frontends, where scaling up and down the number of replicas and rolling out updates
are more important than controlling exactly which host the Pod runs on. Use a DaemonSet when it is important that a copy of a Pod
always run on all or certain hosts, if the DaemonSet provides node-level functionality that allows other Pods to run correctly on that
particular node.

For example, network plugins often include a component that runs as a DaemonSet. The DaemonSet component makes sure that
the node where it's running has working cluster networking.

What's next
Learn about Pods.
Learn about static Pods, which are useful for running Kubernetes control plane components.
Find out how to use DaemonSets
Perform a rolling update on a DaemonSet
Perform a rollback on a DaemonSet (for example, if a roll out didn't work how you expected).
Understand how Kubernetes assigns Pods to Nodes.
Learn about device plugins and add ons, which often run as DaemonSets.
DaemonSet is a top-level resource in the Kubernetes REST API. Read the DaemonSet object definition to understand the API for
daemon sets.

https://kubernetes.io/docs/concepts/_print/ 164/609
7/10/24, 9:28 AM Concepts | Kubernetes

4.2.5 - Jobs
Jobs represent one-off tasks that run to completion and then stop.

A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully
terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful
completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created. Suspending a Job will delete its
active Pods until the Job is resumed again.

A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first
Pod fails or is deleted (for example due to a node hardware failure or a node reboot).

You can also use a Job to run multiple Pods in parallel.

If you want to run a Job (either a single task, or several in parallel) on a schedule, see CronJob.

Running an example Job


Here is an example Job config. It computes π to 2000 places and prints it out. It takes around 10s to complete.

controllers/job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

You can run the example with this command:

kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml

The output is similar to this:

job.batch/pi created

Check on the status of the Job with kubectl :

kubectl describe job pi
kubectl get job pi -o yaml

https://kubernetes.io/docs/concepts/_print/ 165/609
7/10/24, 9:28 AM Concepts | Kubernetes

Name: pi
Namespace: default
Selector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
batch.kubernetes.io/job-name=pi
...
Annotations: batch.kubernetes.io/job-tracking: ""
Parallelism: 1
Completions: 1
Start Time: Mon, 02 Dec 2019 15:20:11 +0200
Completed At: Mon, 02 Dec 2019 15:21:16 +0200
Duration: 65s
Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
Pod Template:
Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
batch.kubernetes.io/job-name=pi
Containers:
pi:
Image: perl:5.34.0
Port: <none>
Host Port: <none>
Command:
perl
-Mbignum=bpi
-wle
print bpi(2000)
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4
Normal Completed 18s job-controller Job completed

To view completed Pods of a Job, use kubectl get pods .

To list all the Pods that belong to a Job in a machine readable form, you can use a command like this:

pods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')


echo $pods

The output is similar to this:

pi-5rwd7

Here, the selector is the same as the selector for the Job. The --output=jsonpath option specifies an expression with the name from
each Pod in the returned list.

View the standard output of one of the pods:

kubectl logs $pods

Another way to view the logs of a Job:

kubectl logs jobs/pi

The output is similar to this:

https://kubernetes.io/docs/concepts/_print/ 166/609
7/10/24, 9:28 AM Concepts | Kubernetes

3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938

Writing a Job spec


As with all other Kubernetes config, a Job needs apiVersion , kind , and metadata fields.

When the control plane creates new Pods for a Job, the .metadata.name of the Job is part of the basis for naming those Pods. The
name of a Job must be a valid DNS subdomain value, but this can produce unexpected results for the Pod hostnames. For best
compatibility, the name should follow the more restrictive rules for a DNS label. Even when the name is a DNS subdomain, the name
must be no longer than 63 characters.

A Job also needs a .spec section.

Job Labels
Job labels will have batch.kubernetes.io/ prefix for job-name and controller-uid .

Pod Template
The .spec.template is the only required field of the .spec .

The .spec.template is a pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion
or kind .

In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see pod selector) and an appropriate
restart policy.

Only a RestartPolicy equal to Never or OnFailure is allowed.

Pod selector
The .spec.selector field is optional. In almost all cases you should not specify it. See section specifying your own pod selector.

Parallel execution for Jobs


There are three main types of task suitable to run as a Job:

1. Non-parallel Jobs
normally, only one Pod is started, unless the Pod fails.
the Job is complete as soon as its Pod terminates successfully.
2. Parallel Jobs with a fixed completion count:
specify a non-zero positive value for .spec.completions .
the Job represents the overall task, and is complete when there are .spec.completions successful Pods.
when using .spec.completionMode="Indexed" , each Pod gets a different index in the range 0 to .spec.completions-1 .
3. Parallel Jobs with a work queue:
do not specify .spec.completions ; it defaults to the value of .spec.parallelism .
the Pods must coordinate amongst themselves or an external service to determine what each should work on. For
example, a Pod might fetch a batch of up to N items from the work queue.
each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is
done.
when any Pod from the Job terminates with success, no new Pods are created.
once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success.
once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They
should all be in the process of exiting.

For a non-parallel Job, you can leave both .spec.completions and .spec.parallelism unset. When both are unset, both are defaulted
to 1.

https://kubernetes.io/docs/concepts/_print/ 167/609
7/10/24, 9:28 AM Concepts | Kubernetes

For a fixed completion count Job, you should set .spec.completions to the number of completions needed. You can set
.spec.parallelism , or leave it unset and it will default to 1.

For a work queue Job, you must leave .spec.completions unset, and set .spec.parallelism to a non-negative integer.
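As a sketch (the Job name and the numbers are illustrative), a fixed completion count Job sets both fields explicitly:

apiVersion: batch/v1
kind: Job
metadata:
  name: fixed-count-example   # hypothetical name
spec:
  completions: 8    # the Job is complete after 8 successful Pods
  parallelism: 2    # at most 2 Pods run at a time
  template:
    ...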

For more information about how to make use of the different types of job, see the job patterns section.

Controlling parallelism
The requested parallelism ( .spec.parallelism ) can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is
specified as 0, then the Job is effectively paused until it is increased.

Actual parallelism (number of pods running at any instant) may be more or less than requested parallelism, for a variety of reasons:

For fixed completion count Jobs, the actual number of pods running in parallel will not exceed the number of remaining
completions. Higher values of .spec.parallelism are effectively ignored.
For work queue Jobs, no new Pods are started after any Pod has succeeded -- remaining Pods are allowed to complete,
however.
If the Job Controller has not had time to react.
If the Job controller failed to create Pods for any reason (lack of ResourceQuota , lack of permission, etc.), then there may be
fewer pods than requested.
The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job.
When a Pod is gracefully shut down, it takes time to stop.

Completion mode

ⓘ FEATURE STATE: Kubernetes v1.24 [stable]

Jobs with fixed completion count - that is, jobs that have non null .spec.completions - can have a completion mode that is specified in
.spec.completionMode :

NonIndexed (default): the Job is considered complete when there have been .spec.completions successfully completed Pods. In
other words, each Pod completion is homologous to each other. Note that Jobs that have null .spec.completions are implicitly
NonIndexed .

Indexed : the Pods of a Job get an associated completion index from 0 to .spec.completions-1 . The index is available through
four mechanisms:

The Pod annotation batch.kubernetes.io/job-completion-index .


The Pod label batch.kubernetes.io/job-completion-index (for v1.28 and later). Note the feature gate PodIndexLabel must
be enabled to use this label, and it is enabled by default.
As part of the Pod hostname, following the pattern $(job-name)-$(index) . When you use an Indexed Job in combination
with a Service, Pods within the Job can use the deterministic hostnames to address each other via DNS. For more
information about how to configure this, see Job with Pod-to-Pod Communication.
From the containerized task, in the environment variable JOB_COMPLETION_INDEX .
The Job is considered complete when there is one successfully completed Pod for each index. For more information about how
to use this mode, see Indexed Job for Parallel Processing with Static Work Assignment.

Note:
Although rare, more than one Pod could be started for the same index (due to various reasons such as node failures, kubelet
restarts, or Pod evictions). In this case, only the first Pod that completes successfully will count towards the completion count
and update the status of the Job. The other Pods that are running or completed for the same index will be deleted by the Job
controller once they are detected.

Handling Pod and container failures


A container in a Pod may fail for a number of reasons, such as because the process in it exited with a non-zero exit code, or the
container was killed for exceeding a memory limit, etc. If this happens, and the .spec.template.spec.restartPolicy = "OnFailure" ,
then the Pod stays on the node, but the container is re-run. Therefore, your program needs to handle the case when it is restarted
https://kubernetes.io/docs/concepts/_print/ 168/609
7/10/24, 9:28 AM Concepts | Kubernetes

locally, or else specify .spec.template.spec.restartPolicy = "Never" . See pod lifecycle for more information on restartPolicy .

An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node (node is upgraded, rebooted,
deleted, etc.), or if a container of the Pod fails and the .spec.template.spec.restartPolicy = "Never" . When a Pod fails, then the Job
controller starts a new Pod. This means that your application needs to handle the case when it is restarted in a new pod. In
particular, it needs to handle temporary files, locks, incomplete output and the like caused by previous runs.

By default, each pod failure is counted towards the .spec.backoffLimit limit, see pod backoff failure policy. However, you can
customize handling of pod failures by setting the Job's pod failure policy.

Additionally, you can choose to count the pod failures independently for each index of an Indexed Job by setting the
.spec.backoffLimitPerIndex field (for more information, see backoff limit per index).

Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = "Never" ,
the same program may sometimes be started twice.

If you do specify .spec.parallelism and .spec.completions both greater than 1, then there may be multiple pods running at once.
Therefore, your pods must also be tolerant of concurrency.

When the feature gates PodDisruptionConditions and JobPodFailurePolicy are both enabled, and the .spec.podFailurePolicy field is
set, the Job controller does not consider a terminating Pod (a pod that has a .metadata.deletionTimestamp field set) as a failure until
that Pod is terminal (its .status.phase is Failed or Succeeded ). However, the Job controller creates a replacement Pod as soon as
the termination becomes apparent. Once the pod terminates, the Job controller evaluates .backoffLimit and .podFailurePolicy for
the relevant Job, taking this now-terminated Pod into consideration.

If either of these requirements is not satisfied, the Job controller counts a terminating Pod as an immediate failure, even if that Pod
later terminates with phase: "Succeeded" .

Pod backoff failure policy


There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration etc. To do so, set
.spec.backoffLimit to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed
Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s ...) capped at six
minutes.

The number of retries is calculated in two ways:

The number of Pods with .status.phase = "Failed" .


When using restartPolicy = "OnFailure" , the number of retries in all the containers of Pods with .status.phase equal to
Pending or Running .

If either of the calculations reaches the .spec.backoffLimit , the Job is considered failed.

Note:
If your job has restartPolicy = "OnFailure", keep in mind that your Pod running the Job will be terminated once the job backoff
limit has been reached. This can make debugging the Job's executable more difficult. We suggest setting restartPolicy = "Never"
when debugging the Job or using a logging system to ensure output from failed Jobs is not lost inadvertently.

Backoff limit per index

ⓘ FEATURE STATE: Kubernetes v1.29 [beta]

Note:
You can only configure the backoff limit per index for an Indexed Job, if you have the JobBackoffLimitPerIndex feature gate
enabled in your cluster.

When you run an indexed Job, you can choose to handle retries for pod failures independently for each index. To do so, set the
.spec.backoffLimitPerIndex to specify the maximal number of pod failures per index.

https://kubernetes.io/docs/concepts/_print/ 169/609
7/10/24, 9:28 AM Concepts | Kubernetes

When the per-index backoff limit is exceeded for an index, Kubernetes considers the index as failed and adds it to the
.status.failedIndexes field. The succeeded indexes, those with successfully executed Pods, are recorded in the
.status.completedIndexes field, regardless of whether you set the backoffLimitPerIndex field.

Note that a failing index does not interrupt execution of other indexes. Once all indexes finish for a Job where you specified a
backoff limit per index, if at least one of those indexes did fail, the Job controller marks the overall Job as failed, by setting the Failed
condition in the status. The Job gets marked as failed even if some, potentially nearly all, of the indexes were processed successfully.

You can additionally limit the maximal number of indexes marked failed by setting the .spec.maxFailedIndexes field. When the
number of failed indexes exceeds the maxFailedIndexes field, the Job controller triggers termination of all remaining running Pods
for that Job. Once all pods are terminated, the entire Job is marked failed by the Job controller, by setting the Failed condition in the
Job status.

Here is an example manifest for a Job that defines a backoffLimitPerIndex :

/controllers/job-backoff-limit-per-index-example.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: job-backoff-limit-per-index-example
spec:
  completions: 10
  parallelism: 3
  completionMode: Indexed  # required for the feature
  backoffLimitPerIndex: 1  # maximal number of failures per index
  maxFailedIndexes: 5      # maximal number of failed indexes before terminating the Job execution
  template:
    spec:
      restartPolicy: Never  # required for the feature
      containers:
      - name: example
        image: python
        command:  # The job fails as there is at least one failed index
                  # (all even indexes fail in here), yet all indexes
                  # are executed as maxFailedIndexes is not exceeded.
        - python3
        - -c
        - |
          import os, sys
          print("Hello world")
          if int(os.environ.get("JOB_COMPLETION_INDEX")) % 2 == 0:
            sys.exit(1)

In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes
exceeds 5, then the entire Job is terminated.

Once the job is finished, the Job status looks as follows:

kubectl get -o yaml job job-backoff-limit-per-index-example

https://kubernetes.io/docs/concepts/_print/ 170/609
7/10/24, 9:28 AM Concepts | Kubernetes

status:
  completedIndexes: 1,3,5,7,9
  failedIndexes: 0,2,4,6,8
  succeeded: 5   # 1 succeeded pod for each of 5 succeeded indexes
  failed: 10     # 2 failed pods (1 retry) for each of 5 failed indexes
  conditions:
  - message: Job has failed indexes
    reason: FailedIndexes
    status: "True"
    type: Failed

Additionally, you may want to use the per-index backoff along with a pod failure policy. When using per-index backoff, there is a new
FailIndex action available which allows you to avoid unnecessary retries within an index.

Pod failure policy

ⓘ FEATURE STATE: Kubernetes v1.26 [beta]

Note:
You can only configure a Pod failure policy for a Job if you have the JobPodFailurePolicy feature gate enabled in your cluster.
Additionally, it is recommended to enable the PodDisruptionConditions feature gate in order to be able to detect and handle Pod
disruption conditions in the Pod failure policy (see also: Pod disruption conditions). Both feature gates are available in
Kubernetes 1.30.

A Pod failure policy, defined with the .spec.podFailurePolicy field, enables your cluster to handle Pod failures based on the
container exit codes and the Pod conditions.

In some situations, you may want to have a better control when handling Pod failures than the control provided by the Pod backoff
failure policy, which is based on the Job's .spec.backoffLimit . These are some examples of use cases:

To optimize costs of running workloads by avoiding unnecessary Pod restarts, you can terminate a Job as soon as one of its
Pods fails with an exit code indicating a software bug.
To guarantee that your Job finishes even if there are disruptions, you can ignore Pod failures caused by disruptions (such as
preemption, API-initiated eviction or taint-based eviction) so that they don't count towards the .spec.backoffLimit limit of
retries.

You can configure a Pod failure policy, in the .spec.podFailurePolicy field, to meet the above use cases. This policy can handle Pod
failures based on the container exit codes and the Pod conditions.

Here is a manifest for a Job that defines a podFailurePolicy :

/controllers/job-pod-failure-policy-example.yaml

https://kubernetes.io/docs/concepts/_print/ 171/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: batch/v1
kind: Job
metadata:
  name: job-pod-failure-policy-example
spec:
  completions: 12
  parallelism: 3
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: docker.io/library/bash:5
        command: ["bash"]  # example command simulating a bug which triggers the FailJob action
        args:
        - -c
        - echo "Hello world!" && sleep 5 && exit 42
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: FailJob
      onExitCodes:
        containerName: main  # optional
        operator: In         # one of: In, NotIn
        values: [42]
    - action: Ignore         # one of: Ignore, FailJob, Count
      onPodConditions:
      - type: DisruptionTarget  # indicates Pod disruption

In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails
with the 42 exit code. The following are the rules for the main container specifically:

an exit code of 0 means that the container succeeded


an exit code of 42 means that the entire Job failed
any other exit code represents that the container failed, and hence the entire Pod. The Pod will be re-created if the total
number of restarts is below backoffLimit . If the backoffLimit is reached, the entire Job fails.

Note:
Because the Pod template specifies a restartPolicy: Never, the kubelet does not restart the main container in that particular Pod.

The second rule of the Pod failure policy, specifying the Ignore action for failed Pods with condition DisruptionTarget excludes Pod
disruptions from being counted towards the .spec.backoffLimit limit of retries.

Note:
If the Job failed, either by the Pod failure policy or Pod backoff failure policy, and the Job is running multiple Pods, Kubernetes
terminates all the Pods in that Job that are still Pending or Running.

These are some requirements and semantics of the API:

if you want to use a .spec.podFailurePolicy field for a Job, you must also define that Job's pod template with
.spec.restartPolicy set to Never .
the Pod failure policy rules you specify under spec.podFailurePolicy.rules are evaluated in order. Once a rule matches a Pod
failure, the remaining rules are ignored. When no rule matches the Pod failure, the default handling applies.
you may want to restrict a rule to a specific container by specifying its name
in spec.podFailurePolicy.rules[*].onExitCodes.containerName . When not specified the rule applies to all containers. When
specified, it should match one of the container or initContainer names in the Pod template.
you may specify the action taken when a Pod failure policy is matched by spec.podFailurePolicy.rules[*].action . Possible
values are:
https://kubernetes.io/docs/concepts/_print/ 172/609
7/10/24, 9:28 AM Concepts | Kubernetes

FailJob : use to indicate that the Pod's job should be marked as Failed and all running Pods should be terminated.
Ignore : use to indicate that the counter towards the .spec.backoffLimit should not be incremented and a replacement
Pod should be created.
Count : use to indicate that the Pod should be handled in the default way. The counter towards the .spec.backoffLimit
should be incremented.
FailIndex : use this action along with backoff limit per index to avoid unnecessary retries within the index of a failed pod.

Note:
When you use a podFailurePolicy, the job controller only matches Pods in the Failed phase. Pods with a deletion timestamp that
are not in a terminal phase (Failed or Succeeded) are considered still terminating. This implies that terminating pods retain a
tracking finalizer until they reach a terminal phase. Since Kubernetes 1.27, Kubelet transitions deleted pods to a terminal phase
(see: Pod Phase). This ensures that deleted pods have their finalizers removed by the Job controller.

Note:
Starting with Kubernetes v1.28, when Pod failure policy is used, the Job controller recreates terminating Pods only once these
Pods reach the terminal Failed phase. This behavior is similar to podReplacementPolicy: Failed. For more information, see Pod
replacement policy.

Success policy
ⓘ FEATURE STATE: Kubernetes v1.30 [alpha]

Note:
You can only configure a success policy for an Indexed Job if you have the JobSuccessPolicy feature gate enabled in your cluster.

When creating an Indexed Job, you can define when a Job can be declared as succeeded using a .spec.successPolicy , based on the
pods that succeeded.

By default, a Job succeeds when the number of succeeded Pods equals .spec.completions . These are some situations where you
might want additional control for declaring a Job succeeded:

When running simulations with different parameters, you might not need all the simulations to succeed for the overall Job to
be successful.
When following a leader-worker pattern, only the success of the leader determines the success or failure of a Job. Examples of
this are frameworks such as MPI and PyTorch.

You can configure a success policy, in the .spec.successPolicy field, to meet the above use cases. This policy can handle Job success
based on the succeeded pods. After the Job meets the success policy, the job controller terminates the lingering Pods. A success
policy is defined by rules. Each rule can take one of the following forms:

When you specify the succeededIndexes only, once all indexes specified in the succeededIndexes succeed, the job controller
marks the Job as succeeded. The succeededIndexes must be a list of intervals between 0 and .spec.completions-1 .
When you specify the succeededCount only, once the number of succeeded indexes reaches the succeededCount , the job
controller marks the Job as succeeded.
When you specify both succeededIndexes and succeededCount , once the number of succeeded indexes from the subset of
indexes specified in the succeededIndexes reaches the succeededCount , the job controller marks the Job as succeeded.

Note that when you specify multiple rules in the .spec.successPolicy.rules , the job controller evaluates the rules in order. Once the
Job meets a rule, the job controller ignores remaining rules.

Here is a manifest for a Job with successPolicy :

/controllers/job-success-policy.yaml

https://kubernetes.io/docs/concepts/_print/ 173/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: batch/v1
kind: Job
metadata:
  name: job-success
spec:
  parallelism: 10
  completions: 10
  completionMode: Indexed  # Required for the success policy
  successPolicy:
    rules:
    - succeededIndexes: 0,2-3
      succeededCount: 1
  template:
    spec:
      containers:
      - name: main
        image: python
        command:  # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,
                  # the overall Job is a success.
        - python3
        - -c
        - |
          import os, sys
          if os.environ.get("JOB_COMPLETION_INDEX") == "2":
            sys.exit(0)
          else:
            sys.exit(1)
      restartPolicy: Never

In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the
Job as succeeded and terminate the lingering Pods when any one of the specified indexes (0, 2, or 3) succeeds. The Job that meets the
success policy gets the SuccessCriteriaMet condition. After the removal of the lingering Pods is issued, the Job gets the Complete
condition.

Note that succeededIndexes is represented as a list of intervals; each interval is written as its first and last element, separated by a
hyphen.

Note:
When you specify both a success policy and some terminating policies such as .spec.backoffLimit and .spec.podFailurePolicy,
once the Job meets either policy, the job controller respects the terminating policy and ignores the success policy.

Job termination and cleanup


When a Job completes, no more Pods are created, but the Pods are usually not deleted either. Keeping them around allows you to
still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is
completed so that you can view its status. It is up to the user to delete old jobs after noting their status. Delete the job with kubectl
(e.g. kubectl delete jobs/pi or kubectl delete -f ./job.yaml ). When you delete the job using kubectl , all the pods it created are
deleted too.

By default, a Job will run uninterrupted unless a Pod fails ( restartPolicy=Never ) or a Container exits in error
( restartPolicy=OnFailure ), at which point the Job defers to the .spec.backoffLimit described above. Once .spec.backoffLimit has
been reached the Job will be marked as failed and any running Pods will be terminated.

Another way to terminate a Job is by setting an active deadline. Do this by setting the .spec.activeDeadlineSeconds field of the Job to
a number of seconds. The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a
Job reaches activeDeadlineSeconds , all of its running Pods are terminated and the Job status will become type: Failed with reason:
DeadlineExceeded .

https://kubernetes.io/docs/concepts/_print/ 174/609
7/10/24, 9:28 AM Concepts | Kubernetes

Note that a Job's .spec.activeDeadlineSeconds takes precedence over its .spec.backoffLimit . Therefore, a Job that is retrying one or
more failed Pods will not deploy additional Pods once it reaches the time limit specified by activeDeadlineSeconds , even if the
backoffLimit is not yet reached.

Example:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout
spec:
  backoffLimit: 5
  activeDeadlineSeconds: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this
field at the proper level.

Keep in mind that the restartPolicy applies to the Pod, and not to the Job itself: there is no automatic Job restart once the Job
status is type: Failed . That is, the Job termination mechanisms activated with .spec.activeDeadlineSeconds and .spec.backoffLimit
result in a permanent Job failure that requires manual intervention to resolve.

Clean up finished jobs automatically


Finished Jobs are usually no longer needed in the system. Keeping them around in the system will put pressure on the API server. If
the Jobs are managed directly by a higher level controller, such as CronJobs, the Jobs can be cleaned up by CronJobs based on the
specified capacity-based cleanup policy.

TTL mechanism for finished Jobs

ⓘ FEATURE STATE: Kubernetes v1.23 [stable]

Another way to clean up finished Jobs (either Complete or Failed ) automatically is to use a TTL mechanism provided by a TTL
controller for finished resources, by specifying the .spec.ttlSecondsAfterFinished field of the Job.

When the TTL controller cleans up the Job, it will delete the Job cascadingly, i.e. delete its dependent objects, such as Pods, together
with the Job. Note that when the Job is deleted, its lifecycle guarantees, such as finalizers, will be honored.

For example:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

https://kubernetes.io/docs/concepts/_print/ 175/609
7/10/24, 9:28 AM Concepts | Kubernetes

The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.

If the field is set to 0 , the Job will be eligible to be automatically deleted immediately after it finishes. If the field is unset, this Job
won't be cleaned up by the TTL controller after it finishes.

Note:
It is recommended to set ttlSecondsAfterFinished field because unmanaged jobs (Jobs that you created directly, and not
indirectly through other workload APIs such as CronJob) have a default deletion policy of orphanDependents causing Pods created
by an unmanaged Job to be left around after that Job is fully deleted. Even though the control plane eventually garbage collects
the Pods from a deleted Job after they either fail or complete, sometimes those lingering pods may cause cluster performance
degradation or in worst case cause the cluster to go offline due to this degradation.

You can use LimitRanges and ResourceQuotas to place a cap on the amount of resources that a particular namespace can
consume.

Job patterns
The Job object can be used to process a set of independent but related work items. These might be emails to be sent, frames to be
rendered, files to be transcoded, ranges of keys in a NoSQL database to scan, and so on.

In a complex system, there may be multiple different sets of work items. Here we are just considering one set of work items that the
user wants to manage together — a batch job.

There are several different patterns for parallel computation, each with strengths and weaknesses. The tradeoffs are:

One Job object for each work item, versus a single Job object for all work items. One Job per work item creates some overhead
for the user and for the system to manage large numbers of Job objects. A single Job for all work items is better for large
numbers of items.
Number of Pods created equals number of work items, versus each Pod can process multiple work items. When the number of
Pods equals the number of work items, the Pods typically require less modification to existing code and containers. Having
each Pod process multiple work items is better for large numbers of items.
Several approaches use a work queue. This requires running a queue service, and modifications to the existing program or
container to make it use the work queue. Other approaches are easier to adapt to an existing containerised application.
When the Job is associated with a headless Service, you can enable the Pods within a Job to communicate with each other to
collaborate in a computation.

The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs. The pattern names are also links to
examples and more detailed description.

Pattern | Single Job object | Fewer pods than work items? | Use app unmodified?
Queue with Pod Per Work Item | ✓ | | sometimes
Queue with Variable Pod Count | ✓ | ✓ |
Indexed Job with Static Work Assignment | ✓ | | ✓
Job with Pod-to-Pod Communication | ✓ | sometimes | sometimes
Job Template Expansion | | | ✓

When you specify completions with .spec.completions , each Pod created by the Job controller has an identical spec . This means
that all pods for a task will have the same command line and the same image, the same volumes, and (almost) the same
environment variables. These patterns are different ways to arrange for pods to work on different things.

This table shows the required settings for .spec.parallelism and .spec.completions for each of the patterns. Here, W is the number
of work items.

https://kubernetes.io/docs/concepts/_print/ 176/609
7/10/24, 9:28 AM Concepts | Kubernetes

Pattern | .spec.completions | .spec.parallelism
Queue with Pod Per Work Item | W | any
Queue with Variable Pod Count | null | any
Indexed Job with Static Work Assignment | W | any
Job with Pod-to-Pod Communication | W | W
Job Template Expansion | 1 | should be 1

Advanced usage
Suspending a Job

ⓘ FEATURE STATE: Kubernetes v1.24 [stable]

When a Job is created, the Job controller will immediately begin creating Pods to satisfy the Job's requirements and will continue to
do so until the Job is complete. However, you may want to temporarily suspend a Job's execution and resume it later, or start Jobs in
suspended state and have a custom controller decide later when to start them.

To suspend a Job, you can update the .spec.suspend field of the Job to true; later, when you want to resume it again, update it to
false. Creating a Job with .spec.suspend set to true will create it in the suspended state.

When a Job is resumed from suspension, its .status.startTime field will be reset to the current time. This means that the
.spec.activeDeadlineSeconds timer will be stopped and reset when a Job is suspended and resumed.

When you suspend a Job, any running Pods that don't have a status of Completed will be terminated with a SIGTERM signal. The
Pod's graceful termination period will be honored and your Pod must handle this signal in this period. This may involve saving
progress for later or undoing changes. Pods terminated this way will not count towards the Job's completions count.

An example Job definition in the suspended state can be like so:

kubectl get job myjob -o yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  suspend: true
  parallelism: 1
  completions: 5
  template:
    spec:
      ...

You can also toggle Job suspension by patching the Job using the command line.

Suspend an active Job:

kubectl patch job/myjob --type=strategic --patch '{"spec":{"suspend":true}}'

Resume a suspended Job:


https://kubernetes.io/docs/concepts/_print/ 177/609
7/10/24, 9:28 AM Concepts | Kubernetes

kubectl patch job/myjob --type=strategic --patch '{"spec":{"suspend":false}}'

The Job's status can be used to determine if a Job is suspended or has been suspended in the past:

kubectl get jobs/myjob -o yaml

apiVersion: batch/v1
kind: Job
# .metadata and .spec omitted
status:
  conditions:
  - lastProbeTime: "2021-02-05T13:14:33Z"
    lastTransitionTime: "2021-02-05T13:14:33Z"
    status: "True"
    type: Suspended
  startTime: "2021-02-05T13:13:48Z"

The Job condition of type "Suspended" with status "True" means the Job is suspended; the lastTransitionTime field can be used to
determine how long the Job has been suspended for. If the status of that condition is "False", then the Job was previously suspended
and is now running. If such a condition does not exist in the Job's status, the Job has never been stopped.

Events are also created when the Job is suspended and resumed:

kubectl describe jobs/myjob

Name: myjob
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl
Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl
Normal Suspended 11m job-controller Job suspended
Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44
Normal Resumed 3s job-controller Job resumed

The last four events, particularly the "Suspended" and "Resumed" events, are directly a result of toggling the .spec.suspend field. In
the time between these two events, we see that no Pods were created, but Pod creation restarted as soon as the Job was resumed.

Mutable Scheduling Directives

ⓘ FEATURE STATE: Kubernetes v1.27 [stable]

In most cases, a parallel job will want the pods to run with constraints, like all in the same zone, or all either on GPU model x or y but
not a mix of both.

The suspend field is the first step towards achieving those semantics. Suspend allows a custom queue controller to decide when a
job should start; However, once a job is unsuspended, a custom queue controller has no influence on where the pods of a job will
actually land.

This feature allows updating a Job's scheduling directives before it starts, which gives custom queue controllers the ability to
influence pod placement while at the same time offloading actual pod-to-node assignment to kube-scheduler. This is allowed only
for suspended Jobs that have never been unsuspended before.


The fields in a Job's pod template that can be updated are node affinity, node selector, tolerations, labels, annotations and
scheduling gates.
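
For example, a queue controller could pin a still-suspended Job to a particular zone before unsuspending it. A minimal sketch,
assuming the Job myjob is suspended and has never been unsuspended (the zone value is illustrative):

kubectl patch job/myjob --type=strategic --patch \
  '{"spec":{"template":{"spec":{"nodeSelector":{"topology.kubernetes.io/zone":"antarctica-east1"}}}}}'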

Specifying your own Pod selector


Normally, when you create a Job object, you do not specify .spec.selector . The system defaulting logic adds this field when the Job
is created. It picks a selector value that will not overlap with any other jobs.

However, in some cases, you might need to override this automatically set selector. To do this, you can specify the .spec.selector of
the Job.

Be very careful when doing this. If you specify a label selector which is not unique to the pods of that Job, and which matches
unrelated Pods, then pods of the unrelated job may be deleted, or this Job may count other Pods as completing it, or one or both
Jobs may refuse to create Pods or run to completion. If a non-unique selector is chosen, then other controllers (e.g.
ReplicationController) and their Pods may behave in unpredictable ways too. Kubernetes will not stop you from making a mistake
when specifying .spec.selector .

Here is an example of a case when you might want to use this feature.

Say Job old is already running. You want existing Pods to keep running, but you want the rest of the Pods it creates to use a
different pod template and for the Job to have a new name. You cannot update the Job because these fields are not updatable.
Therefore, you delete Job old but leave its pods running, using kubectl delete jobs/old --cascade=orphan . Before deleting it, you
make a note of what selector it uses:

kubectl get job old -o yaml

The output is similar to this:

kind: Job
metadata:
  name: old
  ...
spec:
  selector:
    matchLabels:
      batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
  ...

Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have the label
batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002 , they are controlled by Job new as well.

You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for
you automatically.

kind: Job
metadata:
  name: new
  ...
spec:
  manualSelector: true
  selector:
    matchLabels:
      batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
  ...

The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002 . Setting manualSelector: true tells the system
that you know what you are doing and to allow this mismatch.


Job tracking with finalizers

ⓘ FEATURE STATE: Kubernetes v1.26 [stable]

The control plane keeps track of the Pods that belong to any Job and notices if any such Pod is removed from the API server. To do
that, the Job controller creates Pods with the finalizer batch.kubernetes.io/job-tracking . The controller removes the finalizer only
after the Pod has been accounted for in the Job status, allowing the Pod to be removed by other controllers or users.

Note:
See My pod stays terminating if you observe that pods from a Job are stuck with the tracking finalizer.
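
If you want to see the tracking finalizer on a Job's Pods, a minimal sketch (assuming a Job named myjob) is to list its Pods by
the job-name label and print their finalizers:

kubectl get pods -l batch.kubernetes.io/job-name=myjob \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.finalizers}{"\n"}{end}'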

Elastic Indexed Jobs

ⓘ FEATURE STATE: Kubernetes v1.27 [beta]

You can scale Indexed Jobs up or down by mutating both .spec.parallelism and .spec.completions together such that
.spec.parallelism == .spec.completions . When the ElasticIndexedJob feature gate on the API server is disabled, .spec.completions is
immutable.

Use cases for elastic Indexed Jobs include batch workloads which require scaling an indexed Job, such as MPI, Horovod, Ray, and
PyTorch training jobs.
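
For example, to grow an Indexed Job from 5 to 10 completions while keeping parallelism equal to completions, you could patch
both fields together (myindexedjob is a placeholder name):

kubectl patch job/myindexedjob --type=strategic \
  --patch '{"spec":{"parallelism":10,"completions":10}}'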

Delayed creation of replacement pods

ⓘ FEATURE STATE: Kubernetes v1.29 [beta]

Note:
You can only set podReplacementPolicy on Jobs if you enable the JobPodReplacementPolicy feature gate (enabled by default).

By default, the Job controller recreates Pods as soon they either fail or are terminating (have a deletion timestamp). This means that,
at a given time, when some of the Pods are terminating, the number of running Pods for a Job can be greater than parallelism or
greater than one Pod per index (if you are using an Indexed Job).

You may choose to create replacement Pods only when the terminating Pod is fully terminal (has status.phase: Failed ). To do this,
set the .spec.podReplacementPolicy: Failed . The default replacement policy depends on whether the Job has a podFailurePolicy set.
With no Pod failure policy defined for a Job, omitting the podReplacementPolicy field selects the TerminatingOrFailed replacement
policy: the control plane creates replacement Pods immediately upon Pod deletion (as soon as the control plane sees that a Pod for
this Job has deletionTimestamp set). For Jobs with a Pod failure policy set, the default podReplacementPolicy is Failed , and no other
value is permitted. See Pod failure policy to learn more about Pod failure policies for Jobs.

kind: Job
metadata:
  name: new
  ...
spec:
  podReplacementPolicy: Failed
  ...

Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is
the number of Pods owned by the Job that are currently terminating.

kubectl get jobs/myjob -o yaml


apiVersion: batch/v1
kind: Job
# .metadata and .spec omitted
status:
  terminating: 3 # three Pods are terminating and have not yet reached the Failed phase

Delegation of managing a Job object to external controller

ⓘ FEATURE STATE: Kubernetes v1.30 [alpha]

Note:
You can only set the managedBy field on Jobs if you enable the JobManagedBy feature gate (disabled by default).

This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external
controller.

You indicate the controller that reconciles the Job by setting a custom value for the spec.managedBy field - any value other than
kubernetes.io/job-controller . The value of the field is immutable.
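
A minimal sketch of a Job that delegates reconciliation to a hypothetical external controller (the name and managedBy value are
illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: delegated-job
spec:
  managedBy: example.com/custom-job-controller  # any value other than kubernetes.io/job-controller
  template:
    spec:
      ...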

Note:
When using this feature, make sure the controller indicated by the field is installed, otherwise the Job may not be reconciled at
all.

Note:
When developing an external Job controller be aware that your controller needs to operate in a fashion conformant with the
definitions of the API spec and status fields of the Job object.

Please review these in detail in the Job API. We also recommend that you run the e2e conformance tests for the Job object to
verify your implementation.

Finally, when developing an external Job controller make sure it does not use the batch.kubernetes.io/job-tracking finalizer,
reserved for the built-in controller.

Warning:
If you are considering to disable the JobManagedBy feature gate, or to downgrade the cluster to a version without the feature gate
enabled, check if there are jobs with a custom value of the spec.managedBy field. If there are such jobs, there is a risk that they
might be reconciled by two controllers after the operation: the built-in Job controller and the external controller indicated by the
field value.

Alternatives
Bare Pods
When the node that a Pod is running on reboots or fails, the pod is terminated and will not be restarted. However, a Job will create
new Pods to replace terminated ones. For this reason, we recommend that you use a Job rather than a bare Pod, even if your
application requires only a single Pod.

Replication Controller
Jobs are complementary to Replication Controllers. A Replication Controller manages Pods which are not expected to terminate (e.g.
web servers), and a Job manages Pods that are expected to terminate (e.g. batch tasks).


As discussed in Pod Lifecycle, Job is only appropriate for pods with RestartPolicy equal to OnFailure or Never . (Note: If
RestartPolicy is not set, the default value is Always .)

Single Job starts controller Pod


Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort of custom controller for those
Pods. This allows the most flexibility, but may be somewhat complicated to get started with and offers less integration with
Kubernetes.

One example of this pattern would be a Job which starts a Pod which runs a script that in turn starts a Spark master controller (see
spark example), runs a spark driver, and then cleans up.

An advantage of this approach is that the overall process gets the completion guarantee of a Job object, but maintains complete
control over what Pods are created and how work is assigned to them.

What's next
Learn about Pods.
Read about different ways of running Jobs:
Coarse Parallel Processing Using a Work Queue
Fine Parallel Processing Using a Work Queue
Use an indexed Job for parallel processing with static work assignment
Create multiple Jobs based on a template: Parallel Processing using Expansions
Follow the links within Clean up finished jobs automatically to learn more about how your cluster can clean up completed and /
or failed tasks.
Job is part of the Kubernetes REST API. Read the Job object definition to understand the API for jobs.

Read about CronJob, which you can use to define a series of Jobs that will run based on a schedule, similar to the UNIX tool
cron .

Practice how to configure handling of retriable and non-retriable pod failures using podFailurePolicy , based on the step-by-
step examples.


4.2.6 - Automatic Cleanup for Finished Jobs


A time-to-live mechanism to clean up old Jobs that have finished execution.

ⓘ FEATURE STATE: Kubernetes v1.23 [stable]

When your Job has finished, it's useful to keep that Job in the API (and not immediately delete the Job) so that you can tell whether
the Job succeeded or failed.

Kubernetes' TTL-after-finished controller provides a TTL (time to live) mechanism to limit the lifetime of Job objects that have finished
execution.

Cleanup for finished Jobs


The TTL-after-finished controller is only supported for Jobs. You can use this mechanism to clean up finished Jobs (either Complete
or Failed ) automatically by specifying the .spec.ttlSecondsAfterFinished field of a Job, as in this example.
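
A minimal sketch of such a manifest (the name, workload, and TTL value are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100   # delete this Job (and its dependent objects) 100 seconds after it finishes
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never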

The TTL-after-finished controller assumes that a Job is eligible to be cleaned up TTL seconds after the Job has finished. The timer
starts once the status condition of the Job changes to show that the Job is either Complete or Failed ; once the TTL has expired, that
Job becomes eligible for cascading removal. When the TTL-after-finished controller cleans up a job, it will delete it cascadingly, that is
to say it will delete its dependent objects together with it.

Kubernetes honors object lifecycle guarantees on the Job, such as waiting for finalizers.

You can set the TTL seconds at any time. Here are some examples for setting the .spec.ttlSecondsAfterFinished field of a Job:

Specify this field in the Job manifest, so that a Job can be cleaned up automatically some time after it finishes.
Manually set this field of existing, already finished Jobs, so that they become eligible for cleanup.
Use a mutating admission webhook to set this field dynamically at Job creation time. Cluster administrators can use this to
enforce a TTL policy for finished jobs.
Use a mutating admission webhook to set this field dynamically after the Job has finished, and choose different TTL values
based on job status, labels. For this case, the webhook needs to detect changes to the .status of the Job and only set a TTL
when the Job is being marked as completed.
Write your own controller to manage the cleanup TTL for Jobs that match a particular selector.

Caveats
Updating TTL for finished Jobs
You can modify the TTL period, e.g. .spec.ttlSecondsAfterFinished field of Jobs, after the job is created or has finished. If you extend
the TTL period after the existing ttlSecondsAfterFinished period has expired, Kubernetes doesn't guarantee to retain that Job, even
if an update to extend the TTL returns a successful API response.

Time skew
Because the TTL-after-finished controller uses timestamps stored in the Kubernetes jobs to determine whether the TTL has expired
or not, this feature is sensitive to time skew in your cluster, which may cause the control plane to clean up Job objects at the wrong
time.

Clocks aren't always correct, but the difference should be very small. Please be aware of this risk when setting a non-zero TTL.

What's next
Read Clean up Jobs automatically

Refer to the Kubernetes Enhancement Proposal (KEP) for adding this mechanism.


4.2.7 - CronJob
A CronJob starts one-time Jobs on a repeating schedule.

ⓘ FEATURE STATE: Kubernetes v1.21 [stable]

A CronJob creates Jobs on a repeating schedule.

CronJob is meant for performing regular scheduled actions such as backups, report generation, and so on. One CronJob object is like
one line of a crontab (cron table) file on a Unix system. It runs a Job periodically on a given schedule, written in Cron format.

CronJobs have limitations and idiosyncrasies. For example, in certain circumstances, a single CronJob can create multiple concurrent
Jobs. See the limitations below.

When the control plane creates new Jobs and (indirectly) Pods for a CronJob, the .metadata.name of the CronJob is part of the basis
for naming those Pods. The name of a CronJob must be a valid DNS subdomain value, but this can produce unexpected results for
the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a DNS label. Even when the name is
a DNS subdomain, the name must be no longer than 52 characters. This is because the CronJob controller will automatically append
11 characters to the name you provide and there is a constraint that the length of a Job name is no more than 63 characters.

Example
This example CronJob manifest prints the current time and a hello message every minute:

application/job/cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

(Running Automated Tasks with a CronJob takes you through this example in more detail).

Writing a CronJob spec


Schedule syntax
The .spec.schedule field is required. The value of that field follows the Cron syntax:


# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
# │ │ │ │ │                                       OR sun, mon, tue, wed, thu, fri, sat
# │ │ │ │ │
# │ │ │ │ │
# * * * * *

For example, 0 0 13 * 5 states that the task must be started every Friday at midnight, as well as on the 13th of each month at
midnight.

The format also includes extended "Vixie cron" step values. As explained in the FreeBSD manual:

Step values can be used in conjunction with ranges. Following a range with /<number> specifies skips of the number's value
through the range. For example, 0-23/2 can be used in the hours field to specify command execution every other hour (the
alternative in the V7 standard is 0,2,4,6,8,10,12,14,16,18,20,22 ). Steps are also permitted after an asterisk, so if you want to say
"every two hours", just use */2 .

Note:
A question mark (?) in the schedule has the same meaning as an asterisk *, that is, it stands for any of available value for a given
field.

Other than the standard syntax, some macros like @monthly can also be used:

Entry                    Description                                                  Equivalent to

@yearly (or @annually)   Run once a year at midnight of 1 January                     0 0 1 1 *

@monthly                 Run once a month at midnight of the first day of the month   0 0 1 * *

@weekly                  Run once a week at midnight on Sunday morning                0 0 * * 0

@daily (or @midnight)    Run once a day at midnight                                   0 0 * * *

@hourly                  Run once an hour at the beginning of the hour                0 * * * *

To generate CronJob schedule expressions, you can also use web tools like crontab.guru.

Job template
The .spec.jobTemplate defines a template for the Jobs that the CronJob creates, and it is required. It has exactly the same schema as
a Job, except that it is nested and does not have an apiVersion or kind . You can specify common metadata for the templated Jobs,
such as labels or annotations. For information about writing a Job .spec , see Writing a Job Spec.

Deadline for delayed Job start


The .spec.startingDeadlineSeconds field is optional. This field defines a deadline (in whole seconds) for starting the Job, if that Job
misses its scheduled time for any reason.

After missing the deadline, the CronJob skips that instance of the Job (future occurrences are still scheduled). For example, if you
have a backup Job that runs twice a day, you might allow it to start up to 8 hours late, but no later, because a backup taken any later
wouldn't be useful: you would instead prefer to wait for the next scheduled run.

For Jobs that miss their configured deadline, Kubernetes treats them as failed Jobs. If you don't specify startingDeadlineSeconds for a
CronJob, the Job occurrences have no deadline.

If the .spec.startingDeadlineSeconds field is set (not null), the CronJob controller measures the time between when a Job is expected
to be created and now. If the difference is higher than that limit, it will skip this execution.


For example, if it is set to 200 , it allows a Job to be created for up to 200 seconds after the actual schedule.
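
The twice-daily backup example above could be sketched like this (the schedule and deadline values are illustrative):

spec:
  schedule: "0 3,15 * * *"          # run at 03:00 and 15:00
  startingDeadlineSeconds: 28800    # tolerate up to 8 hours of delay, then skip that run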

Concurrency policy
The .spec.concurrencyPolicy field is also optional. It specifies how to treat concurrent executions of a Job that is created by this
CronJob. The spec may specify only one of the following concurrency policies:

Allow (default): The CronJob allows concurrently running Jobs.

Forbid : The CronJob does not allow concurrent runs; if it is time for a new Job run and the previous Job run hasn't finished yet,
the CronJob skips the new Job run. Also note that when the previous Job run finishes, .spec.startingDeadlineSeconds is still
taken into account and may result in a new Job run.

Replace : If it is time for a new Job run and the previous Job run hasn't finished yet, the CronJob replaces the currently running
Job run with a new Job run.

Note that concurrency policy only applies to the Jobs created by the same CronJob. If there are multiple CronJobs, their respective
Jobs are always allowed to run concurrently.
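
A minimal sketch of setting the policy in a CronJob spec (the schedule is illustrative):

spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid   # skip a new run while the previous Job run is still in progress
  jobTemplate:
    ...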

Schedule suspension
You can suspend execution of Jobs for a CronJob, by setting the optional .spec.suspend field to true. The field defaults to false.

This setting does not affect Jobs that the CronJob has already started.

If you do set that field to true, all subsequent executions are suspended (they remain scheduled, but the CronJob controller does not
start the Jobs to run the tasks) until you unsuspend the CronJob.
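
For example, using the hello CronJob from earlier, you could toggle suspension from the command line:

# suspend
kubectl patch cronjob/hello --type=strategic --patch '{"spec":{"suspend":true}}'

# resume
kubectl patch cronjob/hello --type=strategic --patch '{"spec":{"suspend":false}}'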

Caution:
Executions that are suspended during their scheduled time count as missed Jobs. When .spec.suspend changes from true to
false on an existing CronJob without a starting deadline, the missed Jobs are scheduled immediately.

Jobs history limits


The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields specify how many completed and failed Jobs should
be kept. Both fields are optional.

.spec.successfulJobsHistoryLimit : This field specifies the number of successful finished jobs to keep. The default value is 3 .
Setting this field to 0 will not keep any successful jobs.

.spec.failedJobsHistoryLimit : This field specifies the number of failed finished jobs to keep. The default value is 1 . Setting this
field to 0 will not keep any failed jobs.
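
For example, to keep the five most recent successful Jobs and the two most recent failed Jobs (the values are illustrative):

spec:
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 2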

For another way to clean up Jobs automatically, see Clean up finished Jobs automatically.

Time zones

ⓘ FEATURE STATE: Kubernetes v1.27 [stable]

For CronJobs with no time zone specified, the kube-controller-manager interprets schedules relative to its local time zone.

You can specify a time zone for a CronJob by setting .spec.timeZone to the name of a valid time zone. For example, setting
.spec.timeZone: "Etc/UTC" instructs Kubernetes to interpret the schedule relative to Coordinated Universal Time.
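
A minimal sketch:

spec:
  schedule: "30 8 * * *"
  timeZone: "Etc/UTC"   # interpret the schedule as 08:30 UTC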

A time zone database from the Go standard library is included in the binaries and used as a fallback in case an external database is
not available on the system.

CronJob limitations
Unsupported TimeZone specification
Specifying a timezone using CRON_TZ or TZ variables inside .spec.schedule is not officially supported (and never has been).


Starting with Kubernetes 1.29, if you try to set a schedule that includes TZ or CRON_TZ timezone specification, Kubernetes will fail to
create the resource with a validation error. Updates to CronJobs already using TZ or CRON_TZ will continue to report a warning to
the client.

Modifying a CronJob
By design, a CronJob contains a template for new Jobs. If you modify an existing CronJob, the changes you make will apply to new
Jobs that start to run after your modification is complete. Jobs (and their Pods) that have already started continue to run without
changes. That is, the CronJob does not update existing Jobs, even if those remain running.

Job creation
A CronJob creates a Job object approximately once per execution time of its schedule. The scheduling is approximate because there
are certain circumstances where two Jobs might be created, or no Job might be created. Kubernetes tries to avoid those situations,
but does not completely prevent them. Therefore, the Jobs that you define should be idempotent.

If startingDeadlineSeconds is set to a large value or left unset (the default) and if concurrencyPolicy is set to Allow , the Jobs will
always run at least once.

Caution:
If startingDeadlineSeconds is set to a value less than 10 seconds, the CronJob may not be scheduled. This is because the CronJob
controller checks things every 10 seconds.

For every CronJob, the CronJob Controller checks how many schedules it missed in the duration from its last scheduled time until
now. If there are more than 100 missed schedules, then it does not start the Job and logs the error.

Cannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds

It is important to note that if the startingDeadlineSeconds field is set (not nil ), the controller counts how many missed Jobs
occurred from the value of startingDeadlineSeconds until now rather than from the last scheduled time until now. For example, if
startingDeadlineSeconds is 200 , the controller counts how many missed Jobs occurred in the last 200 seconds.

A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, if concurrencyPolicy is set to Forbid
and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed.

For example, suppose a CronJob is set to schedule a new Job every one minute beginning at 08:30:00 , and its
startingDeadlineSeconds field is not set. If the CronJob controller happens to be down from 08:29:00 to 10:21:00 , the Job will not
start as the number of missed Jobs which missed their schedule is greater than 100.

To illustrate this concept further, suppose a CronJob is set to schedule a new Job every one minute beginning at 08:30:00 , and its
startingDeadlineSeconds is set to 200 seconds. If the CronJob controller happens to be down for the same period as the previous
example ( 08:29:00 to 10:21:00 ,) the Job will still start at 10:22:00. This happens as the controller now checks how many missed
schedules happened in the last 200 seconds (i.e., 3 missed schedules), rather than from the last scheduled time until now.

The CronJob is only responsible for creating Jobs that match its schedule, and the Job in turn is responsible for the management of
the Pods it represents.

What's next
Learn about Pods and Jobs, two concepts that CronJobs rely upon.
Read about the detailed format of CronJob .spec.schedule fields.
For instructions on creating and working with CronJobs, and for an example of a CronJob manifest, see Running automated
tasks with CronJobs.
CronJob is part of the Kubernetes REST API. Read the CronJob API reference for more details.


4.2.8 - ReplicationController
Legacy API for managing workloads that can scale horizontally. Superseded by the Deployment and ReplicaSet
APIs.

Note:
A Deployment that configures a ReplicaSet is now the recommended way to set up replication.

A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a
ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.

How a ReplicationController works


If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts
more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are
deleted, or are terminated. For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade.
For this reason, you should use a ReplicationController even if your application requires only a single pod. A ReplicationController is
similar to a process supervisor, but instead of supervising individual processes on a single node, the ReplicationController supervises
multiple pods across multiple nodes.

ReplicationController is often abbreviated to "rc" in discussion, and as a shortcut in kubectl commands.

A simple case is to create one ReplicationController object to reliably run one instance of a Pod indefinitely. A more complex use
case is to run several identical replicas of a replicated service, such as web servers.

Running an example ReplicationController


This example ReplicationController config runs three copies of the nginx web server.

controllers/replication.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Run the example by downloading the example file and then running this command:


kubectl apply -f https://k8s.io/examples/controllers/replication.yaml

The output is similar to this:

replicationcontroller/nginx created

Check on the status of the ReplicationController using this command:

kubectl describe replicationcontrollers/nginx

The output is similar to this:

Name:         nginx
Namespace:    default
Selector:     app=nginx
Labels:       app=nginx
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  FirstSeen  LastSeen  Count  From                      SubobjectPath  Type    Reason            Message
  ---------  --------  -----  ----                      -------------  ----    ------            -------
  20s        20s       1      {replication-controller }                Normal  SuccessfulCreate  Created pod
  20s        20s       1      {replication-controller }                Normal  SuccessfulCreate  Created pod
  20s        20s       1      {replication-controller }                Normal  SuccessfulCreate  Created pod

Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command
may show:

Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed

To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:

pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})


echo $pods

The output is similar to this:

nginx-3ntk0 nginx-4ok8v nginx-qrm3m

Here, the selector is the same as the selector for the ReplicationController (seen in the kubectl describe output), and in a different
form in replication.yaml . The --output=jsonpath option specifies an expression with the name from each pod in the returned list.


Writing a ReplicationController Manifest


As with all other Kubernetes config, a ReplicationController needs apiVersion , kind , and metadata fields.

When the control plane creates new Pods for a ReplicationController, the .metadata.name of the ReplicationController is part of the
basis for naming those Pods. The name of a ReplicationController must be a valid DNS subdomain value, but this can produce
unexpected results for the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a DNS label.

For general information about working with configuration files, see object management.

A ReplicationController also needs a .spec section.

Pod Template
The .spec.template is the only required field of the .spec .

The .spec.template is a pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion
or kind .

In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate labels and an appropriate
restart policy. For labels, make sure not to overlap with other controllers. See pod selector.

Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.

For local container restarts, ReplicationControllers delegate to an agent on the node, for example the Kubelet.

Labels on the ReplicationController


The ReplicationController can itself have labels ( .metadata.labels ). Typically, you would set these the same as the
.spec.template.metadata.labels ; if .metadata.labels is not specified then it defaults to .spec.template.metadata.labels . However,
they are allowed to be different, and the .metadata.labels do not affect the behavior of the ReplicationController.

Pod Selector
The .spec.selector field is a label selector. A ReplicationController manages all the pods with labels that match the selector. It does
not distinguish between pods that it created or deleted and pods that another person or process created or deleted. This allows the
ReplicationController to be replaced without affecting the running pods.

If specified, the .spec.template.metadata.labels must be equal to the .spec.selector , or it will be rejected by the API. If
.spec.selector is unspecified, it will be defaulted to .spec.template.metadata.labels .

Also you should not normally create any pods whose labels match this selector, either directly, with another ReplicationController, or
with another controller such as Job. If you do so, the ReplicationController thinks that it created the other pods. Kubernetes does not
stop you from doing this.

If you do end up with multiple controllers that have overlapping selectors, you will have to manage the deletion yourself (see below).

Multiple Replicas
You can specify how many pods should run concurrently by setting .spec.replicas to the number of pods you would like to have
running concurrently. The number running at any time may be higher or lower, such as if the replicas were just increased or
decreased, or if a pod is gracefully shutdown, and a replacement starts early.

If you do not specify .spec.replicas , then it defaults to 1.

Working with ReplicationControllers


Deleting a ReplicationController and its Pods
To delete a ReplicationController and all its pods, use kubectl delete . Kubectl will scale the ReplicationController to zero and wait
for it to delete each pod before deleting the ReplicationController itself. If this kubectl command is interrupted, it can be restarted.

When using the REST API or client library, you need to do the steps explicitly (scale replicas to 0, wait for pod deletions, then delete
the ReplicationController).


Deleting only a ReplicationController


You can delete a ReplicationController without affecting any of its pods.

Using kubectl, specify the --cascade=orphan option to kubectl delete .
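
For example, to delete the nginx ReplicationController from earlier while leaving its pods running:

kubectl delete rc nginx --cascade=orphan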

When using the REST API or client library, you can delete the ReplicationController object.

Once the original is deleted, you can create a new ReplicationController to replace it. As long as the old and new .spec.selector are
the same, then the new one will adopt the old pods. However, it will not make any effort to make existing pods match a new,
different pod template. To update pods to a new spec in a controlled way, use a rolling update.

Isolating pods from a ReplicationController


Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods
from service for debugging and data recovery. Pods that are removed in this way will be replaced automatically (assuming that the
number of replicas is not also changed).
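
For example, relabeling one of the nginx pods from earlier takes it out of the ReplicationController's target set (the pod name is
illustrative); the ReplicationController then creates a replacement, while the relabeled pod keeps running for debugging:

kubectl label pod nginx-3ntk0 app=debug --overwrite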

Common usage patterns


Rescheduling
As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the
specified number of pods exists, even in the event of node failure or pod termination (for example, due to an action by another
control agent).

Scaling
The ReplicationController enables scaling the number of replicas up or down, either manually or by an auto-scaling control agent, by
updating the replicas field.
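
For example, using the nginx ReplicationController from earlier:

kubectl scale rc nginx --replicas=5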

Rolling updates
The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.

As explained in #1353, the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old
(-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods
regardless of unexpected failures.

Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of
pods were productively serving at any given time.

The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the
primary container of the pod, since it is typically image updates that motivate rolling updates.

Multiple release tracks


In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases
for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.

For instance, a service might target all pods with tier in (frontend), environment in (prod) . Now say you have 10 replicated pods
that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a ReplicationController
with replicas set to 9 for the bulk of the replicas, with labels tier=frontend, environment=prod, track=stable , and another
ReplicationController with replicas set to 1 for the canary, with labels tier=frontend, environment=prod, track=canary . Now the
service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things
out, monitor the results, etc.

Using ReplicationControllers with Services


Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic goes to the old version, and some
goes to the new version.


A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be
composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created
and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services
themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services.

Writing programs for Replication


Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may
become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used
to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work
assignment mechanisms, such as the RabbitMQ work queues, as opposed to static/one-time customization of the configuration of
each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for
example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.

Responsibilities of the ReplicationController


The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only
terminated pods are excluded from its count. In the future, readiness and other information available from the system may be taken
into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external
clients to implement arbitrarily sophisticated replacement and/or scale-down policies.

The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes.
Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in #492), which would
change its replicas field. We will not add scheduling policies (for example, spreading) to the ReplicationController. Nor should it
verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated
processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere.
We even plan to factor out the mechanism for bulk pod creation (#170).

The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be
built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently
supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like Asgard
managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.

API Object
Replication controller is a top-level resource in the Kubernetes REST API. More details about the API object can be found at:
ReplicationController API object.

Alternatives to ReplicationController
ReplicaSet
ReplicaSet is the next-generation ReplicationController that supports the new set-based label selector. It's mainly used by
Deployment as a mechanism to orchestrate pod creation, deletion and updates. Note that we recommend using Deployments
instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all.

Deployment (Recommended)
Deployment is a higher-level API object that updates its underlying Replica Sets and their Pods. Deployments are recommended if
you want the rolling update functionality, because they are declarative, server-side, and have additional features.

Bare Pods
Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any
reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we
recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process
supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A
ReplicationController delegates local container restarts to some agent on the node, such as the kubelet.

Job
Use a Job instead of a ReplicationController for pods that are expected to terminate on their own (that is, batch jobs).

DaemonSet
Use a DaemonSet instead of a ReplicationController for pods that provide a machine-level function, such as machine monitoring or
machine logging. These pods have a lifetime that is tied to a machine lifetime: the pod needs to be running on the machine before
other pods start, and are safe to terminate when the machine is otherwise ready to be rebooted/shutdown.

What's next
Learn about Pods.
Learn about Deployment, the replacement for ReplicationController.
ReplicationController is part of the Kubernetes REST API. Read the ReplicationController object definition to understand the
API for replication controllers.


4.3 - Autoscaling Workloads


With autoscaling, you can automatically update your workloads in one way or another. This allows your cluster
to react to changes in resource demand more elastically and efficiently.

In Kubernetes, you can scale a workload depending on the current demand of resources. This allows your cluster to react to changes
in resource demand more elastically and efficiently.

When you scale a workload, you can either increase or decrease the number of replicas managed by the workload, or adjust the
resources available to the replicas in-place.

The first approach is referred to as horizontal scaling, while the second is referred to as vertical scaling.

There are manual and automatic ways to scale your workloads, depending on your use case.

Scaling workloads manually


Kubernetes supports manual scaling of workloads. Horizontal scaling can be done using the kubectl CLI. For vertical scaling, you
need to patch the resource definition of your workload.

See below for examples of both strategies.

Horizontal scaling: Running multiple instances of your app


Vertical scaling: Resizing CPU and memory resources assigned to containers
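
As a quick command-line illustration of both (the deployment name, container name, and values are illustrative; the vertical patch
assumes the container is named my-app):

# horizontal: change the replica count
kubectl scale deployment/my-app --replicas=3

# vertical: raise the container's CPU request via a strategic merge patch (this triggers a rollout for most workloads)
kubectl patch deployment my-app --patch \
  '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","resources":{"requests":{"cpu":"500m"}}}]}}}}'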

Scaling workloads automatically


Kubernetes also supports automatic scaling of workloads, which is the focus of this page.

The concept of Autoscaling in Kubernetes refers to the ability to automatically update an object that manages a set of Pods (for
example a Deployment).

Scaling workloads horizontally


In Kubernetes, you can automatically scale a workload horizontally using a HorizontalPodAutoscaler (HPA).

It is implemented as a Kubernetes API resource and a controller and periodically adjusts the number of replicas in a workload to
match observed resource utilization such as CPU or memory usage.

There is a walkthrough tutorial of configuring a HorizontalPodAutoscaler for a Deployment.
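
As a quick sketch, you can also create an HPA from the command line (the target and thresholds are illustrative):

kubectl autoscale deployment my-app --cpu-percent=80 --min=2 --max=10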

Scaling workloads vertically

ⓘ FEATURE STATE: Kubernetes v1.25 [stable]

You can automatically scale a workload vertically using a VerticalPodAutoscaler (VPA). Unlike the HPA, the VPA doesn't come with
Kubernetes by default, but is a separate project that can be found on GitHub.

Once installed, it allows you to create CustomResourceDefinitions (CRDs) for your workloads which define how and when to scale the
resources of the managed replicas.

Note:
You will need to have the Metrics Server installed in your cluster for the HPA to work.

At the moment, the VPA can operate in four different modes:


Mode      Description

Auto      Currently equivalent to Recreate; this might change to in-place updates in the future.

Recreate  The VPA assigns resource requests on pod creation as well as updates them on existing pods by evicting them
          when the requested resources differ significantly from the new recommendation.

Initial   The VPA only assigns resource requests on pod creation and never changes them later.

Off       The VPA does not automatically change the resource requirements of the pods. The recommendations are
          calculated and can be inspected in the VPA object.

Requirements for in-place resizing

ⓘ FEATURE STATE: Kubernetes v1.27 [alpha]

Resizing a workload in-place without restarting the Pods or its Containers requires Kubernetes version 1.27 or later. Additionally,
the InPlacePodVerticalScaling feature gate needs to be enabled.

InPlacePodVerticalScaling : Enables in-place Pod vertical scaling.

Autoscaling based on cluster size


For workloads that need to be scaled based on the size of the cluster (for example cluster-dns or other system components), you
can use the Cluster Proportional Autoscaler. Just like the VPA, it is not part of the Kubernetes core, but hosted as its own project on
GitHub.

The Cluster Proportional Autoscaler watches the number of schedulable nodes and cores and scales the number of replicas of the
target workload accordingly.

If the number of replicas should stay the same, you can scale your workloads vertically according to the cluster size using the Cluster
Proportional Vertical Autoscaler. The project is currently in beta and can be found on GitHub.

While the Cluster Proportional Autoscaler scales the number of replicas of a workload, the Cluster Proportional Vertical Autoscaler
adjusts the resource requests for a workload (for example a Deployment or DaemonSet) based on the number of nodes and/or
cores in the cluster.

Event driven Autoscaling


It is also possible to scale workloads based on events, for example using the Kubernetes Event Driven Autoscaler (KEDA).

KEDA is a CNCF graduated project that enables you to scale your workloads based on the number of events to be processed, for
example the number of messages in a queue. There is a wide range of adapters for different event sources to choose from.

Autoscaling based on schedules


Another strategy for scaling your workloads is to schedule the scaling operations, for example in order to reduce resource
consumption during off-peak hours.

Similar to event driven autoscaling, such behavior can be achieved using KEDA in conjunction with its Cron scaler. The Cron scaler
allows you to define schedules (and time zones) for scaling your workloads in or out.

Scaling cluster infrastructure


If scaling workloads isn't enough to meet your needs, you can also scale your cluster infrastructure itself.

Scaling the cluster infrastructure normally means adding or removing nodes. Read cluster autoscaling for more information.


What's next
Learn more about scaling horizontally
Scale a StatefulSet
HorizontalPodAutoscaler Walkthrough
Resize Container Resources In-Place
Autoscale the DNS Service in a Cluster
Learn about cluster autoscaling


4.4 - Managing Workloads


You've deployed your application and exposed it via a Service. Now what? Kubernetes provides a number of tools to help you
manage your application deployment, including scaling and updating.

Organizing resource configurations


Many applications require multiple resources to be created, such as a Deployment along with a Service. Management of multiple
resources can be simplified by grouping them together in the same file (separated by --- in YAML). For example:

application/nginx-app.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Multiple resources can be created the same way as a single resource:

kubectl apply -f https://k8s.io/examples/application/nginx-app.yaml

service/my-nginx-svc created
deployment.apps/my-nginx created

The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will
ensure the scheduler can spread the pods associated with the Service as they are created by the controller(s), such as Deployment.

kubectl apply also accepts multiple -f arguments:

kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \
  -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml

It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all
of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you
can deploy all of the components of your stack together.

A URL can also be specified as a configuration source, which is handy for deploying directly from manifests in your source control
system:

kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml

deployment.apps/my-nginx created

If you need to define more manifests, such as adding a ConfigMap, you can do that too.

External tools
This section lists only the most common tools used for managing workloads on Kubernetes. To see a larger list, view Application
definition and image build in the CNCF Landscape.

Helm

🛇 This item links to a third party project or product that is not part of Kubernetes itself. More information

Helm is a tool for managing packages of pre-configured Kubernetes resources. These packages are known as Helm charts.

Kustomize
Kustomize traverses a Kubernetes manifest to add, remove or update configuration options. It is available both as a standalone
binary and as a native feature of kubectl.

Bulk operations in kubectl


Resource creation isn't the only operation that kubectl can perform in bulk. It can also extract resource names from configuration
files in order to perform other operations, in particular to delete the same resources you created:

kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml

deployment.apps "my-nginx" deleted


service "my-nginx-svc" deleted

In the case of two resources, you can specify both resources on the command line using the resource/name syntax:

kubectl delete deployments/my-nginx services/my-nginx-svc


For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector , to filter
resources by their labels:

kubectl delete deployment,services -l app=nginx

deployment.apps "my-nginx" deleted


service "my-nginx-svc" deleted

Chaining and filtering


Because kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs :

kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )


kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ | xargs -i kubectl get '{}'

The output might be similar to:

NAME           TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
my-nginx-svc   LoadBalancer   10.0.0.208   <pending>     80/TCP    0s

With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o
name output format (print each resource as resource/name). Then you grep only the Service, and then print it with kubectl get .

Recursive operations on local files


If you happen to organize your resources across several subdirectories within a particular directory, you can recursively perform the
operations on the subdirectories also, by specifying --recursive or -R alongside the --filename / -f argument.

For instance, assume there is a directory project/k8s/development that holds all of the manifests needed for the development
environment, organized by resource type:

project/k8s/development
├── configmap
│ └── my-configmap.yaml
├── deployment
│ └── my-deployment.yaml
└── pvc
└── my-pvc.yaml

By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any
subdirectories. If you had tried to create the resources in this directory using the following command, we would have encountered
an error:

kubectl apply -f project/k8s/development

error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)

Instead, specify the --recursive or -R command line argument along with the --filename / -f argument:

kubectl apply -f project/k8s/development --recursive


configmap/my-config created
deployment.apps/my-deployment created
persistentvolumeclaim/my-pvc created

The --recursive argument works with any operation that accepts the --filename / -f argument such as: kubectl create , kubectl
get , kubectl delete , kubectl describe , or even kubectl rollout .

The --recursive argument also works when multiple -f arguments are provided:

kubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive

namespace/development created
namespace/staging created
configmap/my-config created
deployment.apps/my-deployment created
persistentvolumeclaim/my-pvc created

If you're interested in learning more about kubectl , go ahead and read Command line tool (kubectl).

Updating your application without an outage


At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag. kubectl
supports several update operations, each of which is applicable to different scenarios.

You can run multiple copies of your app, and use a rollout to gradually shift the traffic to new healthy Pods. Eventually, all the
running Pods would have the new software.

This section of the page guides you through how to create and update applications with Deployments.

Let's say you were running version 1.14.2 of nginx:

kubectl create deployment my-nginx --image=nginx:1.14.2

deployment.apps/my-nginx created

Ensure that there is 1 replica:

kubectl scale --replicas 1 deployments/my-nginx

deployment.apps/my-nginx scaled

and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:

kubectl patch deployment/my-nginx --type='merge' -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge": "100%" }}}}'

deployment.apps/my-nginx patched

To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl
edit :

https://kubernetes.io/docs/concepts/_print/ 200/609
7/10/24, 9:28 AM Concepts | Kubernetes

kubectl edit deployment/my-nginx


# Change the manifest to use the newer container image, then save your changes

That's it! The Deployment declaratively and progressively updates the deployed nginx application behind the scenes. It ensures that
only a certain number of old replicas may be down while they are being updated, and that only a certain number of new replicas may be
created above the desired number of Pods. To learn more details about how this happens, visit Deployment.

You can use rollouts with DaemonSets, Deployments, or StatefulSets.

Managing rollouts
You can use kubectl rollout to manage a progressive update of an existing application.

For example:

kubectl apply -f my-deployment.yaml

# wait for rollout to finish


kubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout

or

kubectl apply -f backing-stateful-component.yaml

# don't wait for rollout to finish, just check the status


kubectl rollout status statefulsets/backing-stateful-component --watch=false

You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.

Canary deployments
Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same
component. It is common practice to deploy a canary of a new application release (specified via image tag in the pod template) side
by side with the previous release so that the new release can receive live production traffic before fully rolling it out.

For instance, you can use a track label to differentiate different releases.

The primary, stable release would have a track label with value as stable :

name: frontend
replicas: 3
...
labels:
app: guestbook
tier: frontend
track: stable
...
image: gb-frontend:v3

and then you can create a new release of the guestbook frontend that carries the track label with a different value (i.e. canary ), so
that the two sets of Pods do not overlap:

https://kubernetes.io/docs/concepts/_print/ 201/609
7/10/24, 9:28 AM Concepts | Kubernetes

name: frontend-canary
replicas: 1
...
labels:
app: guestbook
tier: frontend
track: canary
...
image: gb-frontend:v4

The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label),
so that the traffic will be redirected to both applications:

selector:
app: guestbook
tier: frontend

You can tweak the number of replicas of the stable and canary releases to determine the ratio of each release that will receive live
production traffic (in this case, 3:1). Once you're confident, you can update the stable track to the new application release and
remove the canary one.
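Putting the snippets above together, a fuller sketch of the two Deployments and the shared Service is shown below. The names, labels, replica counts, and images come from the snippets; the apiVersion, selector, and Service port boilerplate are assumptions added only so the manifests are complete.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
      track: stable
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
        track: stable
    spec:
      containers:
      - name: frontend
        image: gb-frontend:v3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
      track: canary
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
        track: canary
    spec:
      containers:
      - name: frontend
        image: gb-frontend:v4
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    # The track label is omitted, so both stable and canary Pods are selected.
    app: guestbook
    tier: frontend
  ports:
  - protocol: TCP
    port: 80        # assumed port, for illustration only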

Updating annotations
Sometimes you may want to attach annotations to resources. Annotations are arbitrary non-identifying metadata, intended for retrieval by
API clients such as tools or libraries. You can attach them with kubectl annotate . For example:

kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'


kubectl get pods my-nginx-v4-9gw19 -o yaml

apiVersion: v1
kind: Pod
metadata:
annotations:
description: my frontend running nginx
...

For more information, see annotations and kubectl annotate.

Scaling your application


When load on your application grows or shrinks, use kubectl to scale your application. For instance, to decrease the number of
nginx replicas from 3 to 1, do:

kubectl scale deployment/my-nginx --replicas=1

deployment.apps/my-nginx scaled

Now you only have one pod managed by the deployment.

kubectl get pods -l app=nginx

https://kubernetes.io/docs/concepts/_print/ 202/609
7/10/24, 9:28 AM Concepts | Kubernetes

NAME READY STATUS RESTARTS AGE


my-nginx-2035384211-j5fhi 1/1 Running 0 30m

To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:

# This requires an existing source of container and Pod metrics


kubectl autoscale deployment/my-nginx --min=1 --max=3

horizontalpodautoscaler.autoscaling/my-nginx autoscaled

Now your nginx replicas will be scaled up and down as needed, automatically.
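The kubectl autoscale command above creates a HorizontalPodAutoscaler object for you. A roughly equivalent manifest sketch is shown below; the CPU utilization target of 80% is an assumption for illustration (the command above did not specify one), so adjust it to suit your workload.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-nginx
spec:
  scaleTargetRef:           # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-nginx
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # assumed target; requires a metrics source such as metrics-server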

For more information, please see kubectl scale, kubectl autoscale and horizontal pod autoscaler document.

In-place updates of resources


Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created.

kubectl apply
It is suggested to maintain a set of configuration files in source control (see configuration as code), so that they can be maintained
and versioned along with the code for the resources they configure. Then, you can use kubectl apply to push your configuration
changes to the cluster.

This command will compare the version of the configuration that you're pushing with the previous version and apply the changes
you've made, without overwriting any automated changes to properties you haven't specified.

kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml

deployment.apps/my-nginx configured

To learn more about the underlying mechanism, read server-side apply.

kubectl edit
Alternatively, you may also update resources with kubectl edit :

kubectl edit deployment/my-nginx

This is equivalent to first getting the resource, editing it in a text editor, and then applying the updated version:

kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml


vi /tmp/nginx.yaml
# do some edit, and then save the file

kubectl apply -f /tmp/nginx.yaml


deployment.apps/my-nginx configured

rm /tmp/nginx.yaml

This allows you to make more significant changes more easily. Note that you can specify the editor with the EDITOR or KUBE_EDITOR
environment variables.
https://kubernetes.io/docs/concepts/_print/ 203/609
7/10/24, 9:28 AM Concepts | Kubernetes

For more information, please see kubectl edit.

kubectl patch
You can use kubectl patch to update API objects in place. This subcommand supports JSON patch, JSON merge patch, and strategic
merge patch.

See Update API Objects in Place Using kubectl patch for more details.

Disruptive updates
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may want to make a recursive
change immediately, such as to fix broken pods created by a Deployment. To change such fields, use replace --force , which deletes
and re-creates the resource. In this case, you can modify your original configuration file:

kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force

deployment.apps/my-nginx deleted
deployment.apps/my-nginx replaced

What's next
Learn about how to use kubectl for application introspection and debugging.

https://kubernetes.io/docs/concepts/_print/ 204/609
7/10/24, 9:28 AM Concepts | Kubernetes

5 - Services, Load Balancing, and Networking


Concepts and resources behind networking in Kubernetes.

The Kubernetes network model


Every Pod in a cluster gets its own unique cluster-wide IP address (one address per IP address family). This means you do not need
to explicitly create links between Pods and you almost never need to deal with mapping container ports to host ports.
This creates a clean, backwards-compatible model where Pods can be treated much like VMs or physical hosts from the
perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network
segmentation policies):

pods can communicate with all other pods on any other node without NAT
agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node

Note:
For those platforms that support Pods running in the host network (such as Linux), when pods are attached to the host network
of a node they can still communicate with all pods on all nodes without NAT.

This model is not only less complex overall, but it is principally compatible with the desire for Kubernetes to enable low-friction
porting of apps from VMs to containers. If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your
project. This is the same basic model.

Kubernetes IP addresses exist at the Pod scope - containers within a Pod share their network namespaces - including their IP
address and MAC address. This means that containers within a Pod can all reach each other's ports on localhost . This also means
that containers within a Pod must coordinate port usage, but this is no different from processes in a VM. This is called the "IP-per-
pod" model.

How this is implemented is a detail of the particular container runtime in use.

It is possible to request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation.
How that forwarding is implemented is also a detail of the container runtime. The Pod itself is blind to the existence or non-
existence of host ports.

Kubernetes networking addresses four concerns:

Containers within a Pod use networking to communicate via loopback.


Cluster networking provides communication between different Pods.
The Service API lets you expose an application running in Pods to be reachable from outside your cluster.
Ingress provides extra functionality specifically for exposing HTTP applications, websites and APIs.
Gateway API is an add-on that provides an expressive, extensible, and role-oriented family of API kinds for modeling
service networking.
You can also use Services to publish services only for consumption inside your cluster.

The Connecting Applications with Services tutorial lets you learn about Services and Kubernetes networking with a hands-on
example.

Cluster Networking explains how to set up networking for your cluster, and also provides an overview of the technologies involved.

https://kubernetes.io/docs/concepts/_print/ 205/609
7/10/24, 9:28 AM Concepts | Kubernetes

5.1 - Service
Expose an application running in your cluster behind a single outward-facing endpoint, even when the
workload is split across multiple backends.

In Kubernetes, a Service is a method for exposing a network application that is running as one or more Pods in your cluster.

A key aim of Services in Kubernetes is that you don't need to modify your existing application to use an unfamiliar service discovery
mechanism. You can run code in Pods, whether this is code designed for a cloud-native world or an older app you've
containerized. You use a Service to make that set of Pods available on the network so that clients can interact with it.

If you use a Deployment to run your app, that Deployment can create and destroy Pods dynamically. From one moment to the next,
you don't know how many of those Pods are working and healthy; you might not even know what those healthy Pods are named.
Kubernetes Pods are created and destroyed to match the desired state of your cluster. Pods are ephemeral resources (you should
not expect that an individual Pod is reliable and durable).

Each Pod gets its own IP address (Kubernetes expects network plugins to ensure this). For a given Deployment in your cluster, the
set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.

This leads to a problem: if some set of Pods (call them "backends") provides functionality to other Pods (call them "frontends") inside
your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the
backend part of the workload?

Enter Services.

Services in Kubernetes
The Service API, part of Kubernetes, is an abstraction to help you expose groups of Pods over a network. Each Service object defines
a logical set of endpoints (usually these endpoints are Pods) along with a policy about how to make those pods accessible.

For example, consider a stateless image-processing backend which is running with 3 replicas. Those replicas are fungible—frontends
do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should
not need to be aware of that, nor should they need to keep track of the set of backends themselves.

The Service abstraction enables this decoupling.

The set of Pods targeted by a Service is usually determined by a selector that you define. To learn about other ways to define Service
endpoints, see Services without selectors.

If your workload speaks HTTP, you might choose to use an Ingress to control how web traffic reaches that workload. Ingress is not a
Service type, but it acts as the entry point for your cluster. An Ingress lets you consolidate your routing rules into a single resource,
so that you can expose multiple components of your workload, running separately in your cluster, behind a single listener.

The Gateway API for Kubernetes provides extra capabilities beyond Ingress and Service. You can add Gateway to your cluster - it is a
family of extension APIs, implemented using CustomResourceDefinitions - and then use these to configure access to network
services that are running in your cluster.

Cloud-native service discovery


If you're able to use Kubernetes APIs for service discovery in your application, you can query the API server for matching
EndpointSlices. Kubernetes updates the EndpointSlices for a Service whenever the set of Pods in a Service changes.

For non-native applications, Kubernetes offers ways to place a network port or load balancer in between your application and the
backend Pods.

Either way, your workload can use these service discovery mechanisms to find the target it wants to connect to.

Defining a Service
A Service is an object (the same way that a Pod or a ConfigMap is an object). You can create, view or modify Service definitions using
the Kubernetes API. Usually you use a tool such as kubectl to make those API calls for you.

https://kubernetes.io/docs/concepts/_print/ 206/609
7/10/24, 9:28 AM Concepts | Kubernetes

For example, suppose you have a set of Pods that each listen on TCP port 9376 and are labelled as app.kubernetes.io/name=MyApp .
You can define a Service to publish that TCP listener:

apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376

Applying this manifest creates a new Service named "my-service" with the default ClusterIP service type. The Service targets TCP port
9376 on any Pod with the app.kubernetes.io/name: MyApp label.

Kubernetes assigns this Service an IP address (the cluster IP), that is used by the virtual IP address mechanism. For more details on
that mechanism, read Virtual IPs and Service Proxies.

The controller for that Service continuously scans for Pods that match its selector, and then makes any necessary updates to the set
of EndpointSlices for the Service.

The name of a Service object must be a valid RFC 1035 label name.

Note:
A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as
the port field.

Port definitions
Port definitions in Pods have names, and you can reference these names in the targetPort attribute of a Service. For example, we
can bind the targetPort of the Service to the Pod port in the following way:

https://kubernetes.io/docs/concepts/_print/ 207/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
app.kubernetes.io/name: proxy
spec:
containers:
- name: nginx
image: nginx:stable
ports:
- containerPort: 80
name: http-web-svc

---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app.kubernetes.io/name: proxy
ports:
- name: name-of-service-port
protocol: TCP
port: 80
targetPort: http-web-svc

This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available
via different port numbers. This offers a lot of flexibility for deploying and evolving your Services. For example, you can change the
port numbers that Pods expose in the next version of your backend software, without breaking clients.

The default protocol for Services is TCP; you can also use any other supported protocol.

Because many Services need to expose more than one port, Kubernetes supports multiple port definitions for a single Service. Each
port definition can have the same protocol , or a different one.

Services without selectors


Services most commonly abstract access to Kubernetes Pods thanks to the selector, but when used with a corresponding set of
EndpointSlice objects and without a selector, the Service can abstract other kinds of backends, including ones that run outside the
cluster.

For example:

You want to have an external database cluster in production, but in your test environment you use your own databases.
You want to point your Service to a Service in a different Namespace or on another cluster.
You are migrating a workload to Kubernetes. While evaluating the approach, you run only a portion of your backends in
Kubernetes.

In any of these scenarios you can define a Service without specifying a selector to match Pods. For example:

apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- name: http
protocol: TCP
port: 80
targetPort: 9376

https://kubernetes.io/docs/concepts/_print/ 208/609
7/10/24, 9:28 AM Concepts | Kubernetes

Because this Service has no selector, the corresponding EndpointSlice (and legacy Endpoints) objects are not created automatically.
You can map the Service to the network address and port where it's running, by adding an EndpointSlice object manually. For
example:

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
name: my-service-1 # by convention, use the name of the Service
# as a prefix for the name of the EndpointSlice
labels:
# You should set the "kubernetes.io/service-name" label.
# Set its value to match the name of the Service
kubernetes.io/service-name: my-service
addressType: IPv4
ports:
- name: http # should match with the name of the service port defined above
appProtocol: http
protocol: TCP
port: 9376
endpoints:
- addresses:
- "10.4.5.6"
- addresses:
- "10.1.2.3"

Custom EndpointSlices
When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a
namespace must have a unique name. You link an EndpointSlice to a Service by setting the kubernetes.io/service-name label on that
EndpointSlice.

Note:
The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for
IPv4, fe80::/64 for IPv6).

The endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, because kube-proxy doesn't support virtual
IPs as a destination.

For an EndpointSlice that you create yourself, or in your own code, you should also pick a value to use for the label
endpointslice.kubernetes.io/managed-by . If you create your own controller code to manage EndpointSlices, consider using a value
similar to "my-domain.example/name-of-controller" . If you are using a third party tool, use the name of the tool in all-lowercase and
change spaces and other punctuation to dashes ( - ). If people are directly using a tool such as kubectl to manage EndpointSlices,
use a name that describes this manual management, such as "staff" or "cluster-admins" . You should avoid using the reserved
value "controller" , which identifies EndpointSlices managed by Kubernetes' own control plane.
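As a sketch, an EndpointSlice managed by a hypothetical in-house controller could carry both labels like this; the EndpointSlice name, the controller identifier, and the endpoint address are illustrative:

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-2             # illustrative; continues the Service-name prefix convention
  labels:
    kubernetes.io/service-name: my-service
    # Identifies who manages this EndpointSlice; avoid the reserved value "controller".
    endpointslice.kubernetes.io/managed-by: my-domain.example/name-of-controller
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 9376
endpoints:
- addresses:
  - "10.9.8.7"                   # illustrative endpoint address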

Accessing a Service without a selector


Accessing a Service without a selector works the same as if it had a selector. In the example for a Service without a selector, traffic is
routed to one of the two endpoints defined in the EndpointSlice manifest: a TCP connection to 10.1.2.3 or 10.4.5.6, on port 9376.

Note:
The Kubernetes API server does not allow proxying to endpoints that are not mapped to pods. Actions such as kubectl port-
forward service/<service-name> forwardedPort:servicePort where the service has no selector will fail due to this constraint. This
prevents the Kubernetes API server from being used as a proxy to endpoints the caller may not be authorized to access.

An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead. For more
information, see the ExternalName section.

https://kubernetes.io/docs/concepts/_print/ 209/609
7/10/24, 9:28 AM Concepts | Kubernetes

EndpointSlices

ⓘ FEATURE STATE: Kubernetes v1.21 [stable]

EndpointSlices are objects that represent a subset (a slice) of the backing network endpoints for a Service.

Your Kubernetes cluster tracks how many endpoints each EndpointSlice represents. If there are so many endpoints for a Service that
a threshold is reached, then Kubernetes adds another empty EndpointSlice and stores new endpoint information there. By default,
Kubernetes makes a new EndpointSlice once the existing EndpointSlices all contain at least 100 endpoints. Kubernetes does not
make the new EndpointSlice until an extra endpoint needs to be added.

See EndpointSlices for more information about this API.

Endpoints
In the Kubernetes API, an Endpoints (the resource kind is plural) defines a list of network endpoints, typically referenced by a Service
to define which Pods the traffic can be sent to.

The EndpointSlice API is the recommended replacement for Endpoints.

Over-capacity endpoints
Kubernetes limits the number of endpoints that can fit in a single Endpoints object. When there are over 1000 backing endpoints for
a Service, Kubernetes truncates the data in the Endpoints object. Because a Service can be linked with more than one EndpointSlice,
the 1000 backing endpoint limit only affects the legacy Endpoints API.

In that case, Kubernetes selects at most 1000 possible backend endpoints to store into the Endpoints object, and sets an annotation
on the Endpoints: endpoints.kubernetes.io/over-capacity: truncated . The control plane also removes that annotation if the number
of backend Pods drops below 1000.

Traffic is still sent to backends, but any load balancing mechanism that relies on the legacy Endpoints API only sends traffic to at
most 1000 of the available backing endpoints.

The same API limit means that you cannot manually update an Endpoints to have more than 1000 endpoints.

Application protocol

ⓘ FEATURE STATE: Kubernetes v1.20 [stable]

The appProtocol field provides a way to specify an application protocol for each Service port. This is used as a hint for
implementations to offer richer behavior for protocols that they understand. The value of this field is mirrored by the corresponding
Endpoints and EndpointSlice objects.

This field follows standard Kubernetes label syntax. Valid values are one of:

IANA standard service names.

Implementation-defined prefixed names such as mycompany.com/my-custom-protocol .

Kubernetes-defined prefixed names:

Protocol Description

kubernetes.io/h2c HTTP/2 over cleartext as described in RFC 7540

kubernetes.io/ws WebSocket over cleartext as described in RFC 6455

kubernetes.io/wss WebSocket over TLS as described in RFC 6455
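For example, the following Service port sketch hints that the backend speaks HTTP/2 over cleartext, using one of the Kubernetes-defined prefixed names from the table above; the Service name and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-h2c-service             # illustrative name
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - name: grpc
    protocol: TCP                  # transport protocol
    appProtocol: kubernetes.io/h2c # application protocol hint, mirrored into EndpointSlices
    port: 80
    targetPort: 9376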

https://kubernetes.io/docs/concepts/_print/ 210/609
7/10/24, 9:28 AM Concepts | Kubernetes

Multi-port Services
For some Services, you need to expose more than one port. Kubernetes lets you configure multiple port definitions on a Service
object. When using multiple ports for a Service, you must give all of your ports names so that these are unambiguous. For example:

apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 9376
- name: https
protocol: TCP
port: 443
targetPort: 9377

Note:
As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and - . Port
names must also start and end with an alphanumeric character.

For example, the names 123-abc and web are valid, but 123_abc and -web are not.

Service type
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, one that's
accessible from outside of your cluster.

Kubernetes Service types allow you to specify what kind of Service you want.

The available type values and their behaviors are:

ClusterIP

Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is
the default that is used if you don't explicitly specify a type for a Service. You can expose the Service to the public internet using
an Ingress or a Gateway.

NodePort

Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a
cluster IP address, the same as if you had requested a Service of type: ClusterIP.

LoadBalancer

Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load balancing component;
you must provide one, or you can integrate your Kubernetes cluster with a cloud provider.

ExternalName

Maps the Service to the contents of the externalName field (for example, to the hostname api.foo.bar.example). The mapping
configures your cluster's DNS server to return a CNAME record with that external hostname value. No proxying of any kind is set
up.

The type field in the Service API is designed as nested functionality - each level adds to the previous. However there is an exception
to this nested design. You can define a LoadBalancer Service by disabling the load balancer NodePort allocation.

https://kubernetes.io/docs/concepts/_print/ 211/609
7/10/24, 9:28 AM Concepts | Kubernetes

type: ClusterIP
This default Service type assigns an IP address from a pool of IP addresses that your cluster has reserved for that purpose.

Several of the other types for Service build on the ClusterIP type as a foundation.

If you define a Service that has the .spec.clusterIP set to "None" then Kubernetes does not assign an IP address. See headless
Services for more information.

Choosing your own IP address


You can specify your own cluster IP address as part of a Service creation request. To do this, set the .spec.clusterIP field. For
example, if you already have an existing DNS entry that you wish to reuse, or legacy systems that are configured for a specific IP
address and difficult to re-configure.

The IP address that you choose must be a valid IPv4 or IPv6 address from within the service-cluster-ip-range CIDR range that is
configured for the API server. If you try to create a Service with an invalid clusterIP address value, the API server will return a 422
HTTP status code to indicate that there's a problem.

Read avoiding collisions to learn how Kubernetes helps reduce the risk and impact of two different Services both trying to use the
same IP address.

type: NodePort
If you set the type field to NodePort , the Kubernetes control plane allocates a port from a range specified by --service-node-port-
range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service. Your Service
reports the allocated port in its .spec.ports[*].nodePort field.

Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully
supported by Kubernetes, or even to expose one or more nodes' IP addresses directly.

For a node port Service, Kubernetes additionally allocates a port (TCP, UDP or SCTP to match the protocol of the Service). Every node
in the cluster configures itself to listen on that assigned port and to forward traffic to one of the ready endpoints associated with
that Service. You'll be able to contact the type: NodePort Service, from outside the cluster, by connecting to any node using the
appropriate protocol (for example: TCP), and the appropriate port (as assigned to that Service).

Choosing your own port


If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port
or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to
use a valid port number, one that's inside the range configured for NodePort use.

Here is an example manifest for a Service of type: NodePort that specifies a NodePort value (30007, in this example):

apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: NodePort
selector:
app.kubernetes.io/name: MyApp
ports:
- port: 80
# By default and for convenience, the `targetPort` is set to
# the same value as the `port` field.
targetPort: 80
# Optional field
# By default and for convenience, the Kubernetes control plane
# will allocate a port from a range (default: 30000-32767)
nodePort: 30007

Reserve NodePort ranges to avoid collisions


https://kubernetes.io/docs/concepts/_print/ 212/609
7/10/24, 9:28 AM Concepts | Kubernetes

ⓘ FEATURE STATE: Kubernetes v1.29 [stable]

The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios.
When a user wants to create a NodePort service that uses a specific port, the target port may conflict with another port that has
already been assigned.

To avoid this problem, the port range for NodePort services is divided into two bands. Dynamic port assignment uses the upper
band by default, and it may use the lower band once the upper band has been exhausted. Users can then allocate from the lower
band with a lower risk of port collision.

Custom IP address configuration for type: NodePort Services

You can set up nodes in your cluster to use a particular IP address for serving node port services. You might want to do this if each
node is connected to multiple networks (for example: one network for application traffic, and another network for traffic between
nodes and the control plane).

If you want to specify particular IP address(es) to proxy the port, you can set the --nodeport-addresses flag for kube-proxy or the
equivalent nodePortAddresses field of the kube-proxy configuration file to particular IP block(s).

This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8 , 192.0.2.0/25 ) to specify IP address ranges that kube-proxy
should consider as local to this node.

For example, if you start kube-proxy with the --nodeport-addresses=127.0.0.0/8 flag, kube-proxy only selects the loopback interface
for NodePort Services. The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all
available network interfaces for NodePort. (That's also compatible with earlier Kubernetes releases.)
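For reference, here is a minimal sketch of the equivalent setting in a kube-proxy configuration file; the CIDR is illustrative and all other kube-proxy settings are left at their defaults:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Only addresses within these blocks are treated as local node IPs for NodePort Services.
nodePortAddresses:
- "10.0.0.0/8"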

Note:
This Service is visible as <NodeIP>:spec.ports[*].nodePort and .spec.clusterIP:spec.ports[*].port. If the --nodeport-addresses flag
for kube-proxy or the equivalent field in the kube-proxy configuration file is set, <NodeIP> would be a filtered node IP address (or
possibly IP addresses).

type: LoadBalancer
On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your
Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is
published in the Service's .status.loadBalancer field. For example:

apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
clusterIP: 10.0.171.239
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 192.0.2.127

Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.

To implement a Service of type: LoadBalancer , Kubernetes typically starts off by making the changes that are equivalent to you
requesting a Service of type: NodePort . The cloud-controller-manager component then configures the external load balancer to
forward traffic to that assigned node port.

https://kubernetes.io/docs/concepts/_print/ 213/609
7/10/24, 9:28 AM Concepts | Kubernetes

You can configure a load balanced Service to omit assigning a node port, provided that the cloud provider implementation supports
this.

Some cloud providers allow you to specify the loadBalancerIP . In those cases, the load-balancer is created with the user-specified
loadBalancerIP . If the loadBalancerIP field is not specified, the load balancer is set up with an ephemeral IP address. If you specify a
loadBalancerIP but your cloud provider does not support the feature, the loadBalancerIP field that you set is ignored.

Note:
The .spec.loadBalancerIP field for a Service was deprecated in Kubernetes v1.24.

This field was under-specified and its meaning varies across implementations. It also cannot support dual-stack networking. This
field may be removed in a future API version.

If you're integrating with a provider that supports specifying the load balancer IP address(es) for a Service via a (provider
specific) annotation, you should switch to doing that.

If you are writing code for a load balancer integration with Kubernetes, avoid using this field. You can integrate with Gateway
rather than Service, or you can define your own (provider specific) annotations on the Service that specify the equivalent detail.

Node liveness impact on load balancer traffic


Load balancer health checks are critical to modern applications. They are used to determine which server (virtual machine, or IP
address) the load balancer should dispatch traffic to. The Kubernetes APIs do not define how health checks have to be implemented
for Kubernetes managed load balancers, instead it's the cloud providers (and the people implementing integration code) who decide
on the behavior. Load balancer health checks are extensively used within the context of supporting the externalTrafficPolicy field
for Services.

Load balancers with mixed protocol types

ⓘ FEATURE STATE: Kubernetes v1.26 [stable]

By default, for LoadBalancer type of Services, when there is more than one port defined, all ports must have the same protocol, and
the protocol must be one which is supported by the cloud provider.

The feature gate MixedProtocolLBService (enabled by default for the kube-apiserver as of v1.24) allows the use of different protocols
for LoadBalancer type of Services, when there is more than one port defined.

Note:
The set of protocols that can be used for load balanced Services is defined by your cloud provider; they may impose restrictions
beyond what the Kubernetes API enforces.

Disabling load balancer NodePort allocation

ⓘ FEATURE STATE: Kubernetes v1.24 [stable]

You can optionally disable node port allocation for a Service of type: LoadBalancer , by setting the field
spec.allocateLoadBalancerNodePorts to false . This should only be used for load balancer implementations that route traffic directly
to pods as opposed to using node ports. By default, spec.allocateLoadBalancerNodePorts is true and type LoadBalancer Services will
continue to allocate node ports. If spec.allocateLoadBalancerNodePorts is set to false on an existing Service with allocated node
ports, those node ports will not be de-allocated automatically. You must explicitly remove the nodePorts entry in every Service port
to de-allocate those node ports.
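A minimal sketch of such a Service follows; the selector and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  # Skip NodePort allocation; only appropriate when the load balancer
  # implementation routes traffic directly to Pods.
  allocateLoadBalancerNodePorts: false
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376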

Specifying class of load balancer implementation

ⓘ FEATURE STATE: Kubernetes v1.24 [stable]

https://kubernetes.io/docs/concepts/_print/ 214/609
7/10/24, 9:28 AM Concepts | Kubernetes

For a Service with type set to LoadBalancer , the .spec.loadBalancerClass field enables you to use a load balancer implementation
other than the cloud provider default.

By default, .spec.loadBalancerClass is not set and a LoadBalancer type of Service uses the cloud provider's default load balancer
implementation if the cluster is configured with a cloud provider using the --cloud-provider component flag.

If you specify .spec.loadBalancerClass , it is assumed that a load balancer implementation that matches the specified class is
watching for Services. Any default load balancer implementation (for example, the one provided by the cloud provider) will ignore
Services that have this field set. spec.loadBalancerClass can be set on a Service of type LoadBalancer only. Once set, it cannot be
changed. The value of spec.loadBalancerClass must be a label-style identifier, with an optional prefix such as " internal-vip " or
" example.com/internal-vip ". Unprefixed names are reserved for end-users.

Specifying IPMode of load balancer status

ⓘ FEATURE STATE: Kubernetes v1.30 [beta]

As a Beta feature in Kubernetes 1.30, a feature gate named LoadBalancerIPMode allows you to set the
.status.loadBalancer.ingress.ipMode for a Service with type set to LoadBalancer . The .status.loadBalancer.ingress.ipMode specifies
how the load-balancer IP behaves. It may be specified only when the .status.loadBalancer.ingress.ip field is also specified.

There are two possible values for .status.loadBalancer.ingress.ipMode : "VIP" and "Proxy". The default value is "VIP", meaning that
traffic is delivered to the node with the destination set to the load-balancer's IP and port. There are two cases when setting this to
"Proxy", depending on how the load balancer from the cloud provider delivers the traffic:

If the traffic is delivered to the node then DNATed to the pod, the destination would be set to the node's IP and node port;
If the traffic is delivered directly to the pod, the destination would be set to the pod's IP and port.

Service implementations may use this information to adjust traffic routing.
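The status is written by the load balancer controller rather than by you; the sketch below only illustrates the shape of the field, and the IP address is illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127
      # "Proxy" tells service implementations that the load balancer delivers
      # traffic with the destination rewritten to the node or Pod, not to the VIP.
      ipMode: Proxy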

Internal load balancer


In a mixed environment it is sometimes necessary to route traffic from Services inside the same (virtual) network address block.

In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your
endpoints.

To set an internal load balancer, add one of the following annotations to your Service depending on the cloud service provider you're
using:

(Provider-specific annotations are listed on the documentation website as tabs for: Default, GCP, AWS, Azure, IBM Cloud, OpenStack, Baidu Cloud, Tencent Cloud, Alibaba Cloud, and OCI.)

type: ExternalName
Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra . You specify
these Services with the spec.externalName parameter.

This Service definition, for example, maps the my-service Service in the prod namespace to my.database.example.com :

apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: prod
spec:
type: ExternalName
externalName: my.database.example.com

Note:
https://kubernetes.io/docs/concepts/_print/ 215/609
7/10/24, 9:28 AM Concepts | Kubernetes

A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as
an IP address (the internet does not however allow such names in DNS). Services with external names that resemble IPv4
addresses are not resolved by DNS servers.

If you want to map a Service directly to a specific IP address, consider using headless Services.

When looking up the host my-service.prod.svc.cluster.local , the cluster DNS Service returns a CNAME record with the value
my.database.example.com . Accessing my-service works in the same way as other Services but with the crucial difference that
redirection happens at the DNS level rather than via proxying or forwarding. Should you later decide to move your database into
your cluster, you can start its Pods, add appropriate selectors or endpoints, and change the Service's type .

Caution:
You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName
then the hostname used by clients inside your cluster is different from the name that the ExternalName references.

For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a Host:
header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that
the client connected to.

Headless Services
Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed headless Services, by
explicitly specifying "None" for the cluster IP address ( .spec.clusterIP ).

You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes'
implementation.

For headless Services, a cluster IP is not allocated, kube-proxy does not handle these Services, and there is no load balancing or
proxying done by the platform for them.

A headless Service allows a client to connect to whichever Pod it prefers, directly. Services that are headless don't configure routes
and packet forwarding using virtual IP addresses and proxies; instead, headless Services report the endpoint IP addresses of the
individual pods via internal DNS records, served through the cluster's DNS service. To define a headless Service, you make a Service
with .spec.type set to ClusterIP (which is also the default for type ), and you additionally set .spec.clusterIP to None.

The string value None is a special case and is not the same as leaving the .spec.clusterIP field unset.
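A minimal sketch of a headless Service follows; the selector and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # illustrative name
spec:
  clusterIP: None             # headless: no cluster IP is allocated and no proxying is done
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376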

How DNS is automatically configured depends on whether the Service has selectors defined:

With selectors
For headless Services that define selectors, the endpoints controller creates EndpointSlices in the Kubernetes API, and modifies the
DNS configuration to return A or AAAA records (IPv4 or IPv6 addresses) that point directly to the Pods backing the Service.

Without selectors
For headless Services that do not define selectors, the control plane does not create EndpointSlice objects. However, the DNS
system looks for and configures either:

DNS CNAME records for type: ExternalName Services.


DNS A / AAAA records for all IP addresses of the Service's ready endpoints, for all Service types other than ExternalName .
For IPv4 endpoints, the DNS system creates A records.
For IPv6 endpoints, the DNS system creates AAAA records.

When you define a headless Service without a selector, the port must match the targetPort .

https://kubernetes.io/docs/concepts/_print/ 216/609
7/10/24, 9:28 AM Concepts | Kubernetes

Discovering services
For clients running inside your cluster, Kubernetes supports two primary modes of finding a Service: environment variables and
DNS.

Environment variables
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It adds {SVCNAME}_SERVICE_HOST
and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.

For example, the Service redis-primary which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11, produces
the following environment variables:

REDIS_PRIMARY_SERVICE_HOST=10.0.0.11
REDIS_PRIMARY_SERVICE_PORT=6379
REDIS_PRIMARY_PORT=tcp://10.0.0.11:6379
REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp
REDIS_PRIMARY_PORT_6379_TCP_PORT=6379
REDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11

Note:
When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port
and cluster IP to the client Pods, you must create the Service before the client Pods come into existence. Otherwise, those client
Pods won't have their environment variables populated.

If you only use DNS to discover the cluster IP for a Service, you don't need to worry about this ordering issue.

Kubernetes also supports and provides variables that are compatible with Docker Engine's "legacy container links" feature. You can
read makeLinkVariables to see how this is implemented in Kubernetes.

DNS
You can (and almost always should) set up a DNS service for your Kubernetes cluster using an add-on.

A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new Services and creates a set of DNS records for
each one. If DNS has been enabled throughout your cluster then all Pods should automatically be able to resolve Services by their
DNS name.

For example, if you have a Service called my-service in a Kubernetes namespace my-ns , the control plane and the DNS Service
acting together create a DNS record for my-service.my-ns . Pods in the my-ns namespace should be able to find the service by doing
a name lookup for my-service ( my-service.my-ns would also work).

Pods in other namespaces must qualify the name as my-service.my-ns . These names will resolve to the cluster IP assigned for the
Service.

Kubernetes also supports DNS SRV (Service) records for named ports. If the my-service.my-ns Service has a port named http with
the protocol set to TCP , you can do a DNS SRV query for _http._tcp.my-service.my-ns to discover the port number for http , as well
as the IP address.

The Kubernetes DNS server is the only way to access ExternalName Services. You can find more information about ExternalName
resolution in DNS for Services and Pods.

Virtual IP addressing mechanism


The Virtual IPs and Service Proxies page explains the mechanism Kubernetes provides to expose a Service with a virtual IP address.

Traffic policies
You can set the .spec.internalTrafficPolicy and .spec.externalTrafficPolicy fields to control how Kubernetes routes traffic to
healthy (“ready”) backends.
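As a sketch, both fields are set directly on the Service spec; Local restricts routing to endpoints on the node that receives the traffic, while Cluster (the default) allows routing to any ready endpoint. The selector and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # external (node port / load balancer) traffic stays on the receiving node
  internalTrafficPolicy: Local   # in-cluster traffic goes only to endpoints on the client's node
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376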
https://kubernetes.io/docs/concepts/_print/ 217/609
7/10/24, 9:28 AM Concepts | Kubernetes

See Traffic Policies for more details.

Traffic distribution

ⓘ FEATURE STATE: Kubernetes v1.30 [alpha]

The .spec.trafficDistribution field provides another way to influence traffic routing within a Kubernetes Service. While traffic
policies focus on strict semantic guarantees, traffic distribution allows you to express preferences (such as routing to topologically
closer endpoints). This can help optimize for performance, cost, or reliability. This optional field can be used if you have enabled the
ServiceTrafficDistribution feature gate for your cluster and all of its nodes. In Kubernetes 1.30, the following field value is
supported:

PreferClose

Indicates a preference for routing traffic to endpoints that are topologically proximate to the client. The interpretation of
"topologically proximate" may vary across implementations and could encompass endpoints within the same node, rack, zone, or
even region. Setting this value gives implementations permission to make different tradeoffs, e.g. optimizing for proximity rather
than equal distribution of load. Users should not set this value if such tradeoffs are not acceptable.

If the field is not set, the implementation will apply its default routing strategy.
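A sketch of a Service expressing this preference, assuming the ServiceTrafficDistribution feature gate is enabled; the selector and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  # Prefer endpoints that are topologically closer to the client,
  # where the implementation supports it.
  trafficDistribution: PreferClose
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376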

See Traffic Distribution for more details.

Session stickiness
If you want to make sure that connections from a particular client are passed to the same Pod each time, you can configure session
affinity based on the client's IP address. Read session affinity to learn more.
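A sketch of client-IP session affinity on a Service; the three-hour timeout, selector, and ports are illustrative values:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  sessionAffinity: ClientIP        # send connections from a given client IP to the same backend Pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # illustrative: drop the affinity after 3 hours of inactivity
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376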

External IPs
If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those externalIPs . When
network traffic arrives into the cluster, with the external IP (as destination IP) and the port matching that Service, rules and routes
that Kubernetes has configured ensure that the traffic is routed to one of the endpoints for that Service.

When you define a Service, you can specify externalIPs for any service type. In the example below, the Service named "my-service"
can be accessed by clients using TCP, on "198.51.100.32:80" (calculated from .spec.externalIPs[] and .spec.ports[].port ).

apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 49152
externalIPs:
- 198.51.100.32

Note:
Kubernetes does not manage allocation of externalIPs; these are the responsibility of the cluster administrator.

API Object
Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.
https://kubernetes.io/docs/concepts/_print/ 218/609
7/10/24, 9:28 AM Concepts | Kubernetes

What's next
Learn more about Services and how they fit into Kubernetes:

Follow the Connecting Applications with Services tutorial.


Read about Ingress, which exposes HTTP and HTTPS routes from outside the cluster to Services within your cluster.
Read about Gateway, an extension to Kubernetes that provides more flexibility than Ingress.

For more context, read the following:

Virtual IPs and Service Proxies


EndpointSlices
Service API reference
EndpointSlice API reference
Endpoint API reference (legacy)

https://kubernetes.io/docs/concepts/_print/ 219/609
7/10/24, 9:28 AM Concepts | Kubernetes

5.2 - Ingress
Make your HTTP (or HTTPS) network service available using a protocol-aware configuration mechanism, that
understands web concepts like URIs, hostnames, paths, and more. The Ingress concept lets you map traffic to
different backends based on rules you define via the Kubernetes API.

ⓘ FEATURE STATE: Kubernetes v1.19 [stable]

An API object that manages external access to the services in a cluster, typically HTTP.

Ingress may provide load balancing, SSL termination and name-based virtual hosting.

Note:
Ingress is frozen. New features are being added to the Gateway API.

Terminology
For clarity, this guide defines the following terms:

Node: A worker machine in Kubernetes, part of a cluster.


Cluster: A set of Nodes that run containerized applications managed by Kubernetes. For this example, and in most common
Kubernetes deployments, nodes in the cluster are not part of the public internet.
Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or
a physical piece of hardware.
Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes
networking model.
Service: A Kubernetes Service that identifies a set of Pods using label selectors. Unless mentioned otherwise, Services are
assumed to have virtual IPs only routable within the cluster network.

What is Ingress?
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules
defined on the Ingress resource.

Here is a simple example where an Ingress sends all its traffic to one Service:

Figure: Ingress. A client sends traffic to an Ingress-managed load balancer; an Ingress routing rule directs that traffic to a Service inside the cluster, which forwards it to the backing Pods.

An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-
based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also
configure your edge router or additional frontends to help handle the traffic.

An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses
a service of type Service.Type=NodePort or Service.Type=LoadBalancer.

https://kubernetes.io/docs/concepts/_print/ 220/609
7/10/24, 9:28 AM Concepts | Kubernetes

Prerequisites
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.

You may need to deploy an Ingress controller such as ingress-nginx. You can choose from a number of Ingress controllers.

Ideally, all Ingress controllers should fit the reference specification. In reality, the various Ingress controllers operate slightly
differently.

Note:
Make sure you review your Ingress controller's documentation to understand the caveats of choosing it.

The Ingress resource


A minimal Ingress resource example:

service/networking/minimal-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx-example
rules:
- http:
paths:
- path: /testpath
pathType: Prefix
backend:
service:
name: test
port:
number: 80

An Ingress needs apiVersion , kind , metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain
name. For general information about working with config files, see deploying applications, configuring containers, managing
resources. Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which
is the rewrite-target annotation. Different Ingress controllers support different annotations. Review the documentation for your
choice of Ingress controller to learn which annotations are supported.

The Ingress spec has all the information needed to configure a load balancer or proxy server. Most importantly, it contains a list of
rules matched against all incoming requests. The Ingress resource only supports rules for directing HTTP(S) traffic.

If the ingressClassName is omitted, a default Ingress class should be defined.

Some Ingress controllers work without the definition of a default IngressClass . For example, the Ingress-NGINX
controller can be configured with the flag --watch-ingress-without-class . It is recommended, though, to specify the default
IngressClass as shown below.

Ingress rules
Each HTTP rule contains the following information:

An optional host. In this example, no host is specified, so the rule applies to all inbound HTTP traffic through the IP address
specified. If a host is provided (for example, foo.bar.com), the rules apply to that host.

https://kubernetes.io/docs/concepts/_print/ 221/609
7/10/24, 9:28 AM Concepts | Kubernetes

A list of paths (for example, /testpath ), each of which has an associated backend defined with a service.name and a
service.port.name or service.port.number . Both the host and path must match the content of an incoming request before the
load balancer directs traffic to the referenced Service.
A backend is a combination of Service and port names as described in the Service doc or a custom resource backend by way of
a CRD. HTTP (and HTTPS) requests to the Ingress that match the host and path of the rule are sent to the listed backend.

A defaultBackend is often configured in an Ingress controller to service any requests that do not match a path in the spec.

DefaultBackend
An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend is the backend that should handle
requests in that case. The defaultBackend is conventionally a configuration option of the Ingress controller and is not specified in
your Ingress resources. If no .spec.rules are specified, .spec.defaultBackend must be specified. If defaultBackend is not set, the
handling of requests that do not match any of the rules will be up to the ingress controller (consult the documentation for your
ingress controller to find out how it handles this case).

If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is routed to your default backend.

Resource backends
A Resource backend is an ObjectRef to another Kubernetes resource within the same namespace as the Ingress object. A Resource
is a mutually exclusive setting with Service, and will fail validation if both are specified. A common usage for a Resource backend is to
ingress data to an object storage backend with static assets.

service/networking/ingress-resource-backend.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource-backend
spec:
defaultBackend:
resource:
apiGroup: k8s.example.com
kind: StorageBucket
name: static-assets
rules:
- http:
paths:
- path: /icons
pathType: ImplementationSpecific
backend:
resource:
apiGroup: k8s.example.com
kind: StorageBucket
name: icon-assets

After creating the Ingress above, you can view it with the following command:

kubectl describe ingress ingress-resource-backend

Name: ingress-resource-backend
Namespace: default
Address:
Default backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets
Rules:
Host Path Backends
---- ---- --------
*
/icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets
Annotations: <none>
Events: <none>

Path types
Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail
validation. There are three supported path types:

ImplementationSpecific : With this path type, matching is up to the IngressClass. Implementations can treat this as a separate
pathType or treat it identically to Prefix or Exact path types.

Exact : Matches the URL path exactly and with case sensitivity.

Prefix : Matches based on a URL path prefix split by / . Matching is case sensitive and done on a path element by element
basis. A path element refers to the list of labels in the path split by the / separator. A request is a match for path p if every p is
an element-wise prefix of p of the request path.

Note:
If the last element of the path is a substring of the last element in request path, it is not a match (for example: /foo/bar
matches /foo/bar/baz, but does not match /foo/barbaz).

Examples
| Kind | Path(s) | Request path(s) | Matches? |
|--------|-----------------------------|-------------------|----------------------------------|
| Prefix | / | (all paths) | Yes |
| Exact | /foo | /foo | Yes |
| Exact | /foo | /bar | No |
| Exact | /foo | /foo/ | No |
| Exact | /foo/ | /foo | No |
| Prefix | /foo | /foo , /foo/ | Yes |
| Prefix | /foo/ | /foo , /foo/ | Yes |
| Prefix | /aaa/bb | /aaa/bbb | No |
| Prefix | /aaa/bbb | /aaa/bbb | Yes |
| Prefix | /aaa/bbb/ | /aaa/bbb | Yes, ignores trailing slash |
| Prefix | /aaa/bbb | /aaa/bbb/ | Yes, matches trailing slash |
| Prefix | /aaa/bbb | /aaa/bbb/ccc | Yes, matches subpath |
| Prefix | /aaa/bbb | /aaa/bbbxyz | No, does not match string prefix |
| Prefix | / , /aaa | /aaa/ccc | Yes, matches /aaa prefix |
| Prefix | / , /aaa , /aaa/bbb | /aaa/bbb | Yes, matches /aaa/bbb prefix |
| Prefix | / , /aaa , /aaa/bbb | /ccc | Yes, matches / prefix |
| Prefix | /aaa | /ccc | No, uses default backend |
| Mixed | /foo (Prefix), /foo (Exact) | /foo | Yes, prefers Exact |

Multiple matches
In some cases, multiple paths within an Ingress will match a request. In those cases precedence will be given first to the longest
matching path. If two paths are still equally matched, precedence will be given to paths with an exact path type over prefix path type.
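
As a hedged illustration of this precedence, the sketch below defines both an Exact and a Prefix rule for /login (the Service names and Ingress class are placeholders, not taken from the upstream examples). A request to exactly /login is routed to login-exact, while a request to /login/reset only matches the Prefix rule and goes to login-prefix:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multiple-matches-example
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      # Exact wins whenever the request path is exactly /login
      - path: /login
        pathType: Exact
        backend:
          service:
            name: login-exact
            port:
              number: 80
      # Prefix handles /login and any /login/... subpath
      - path: /login
        pathType: Prefix
        backend:
          service:
            name: login-prefix
            port:
              number: 80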

Hostname wildcards
Hosts can be precise matches (for example “ foo.bar.com ”) or a wildcard (for example “ *.foo.com ”). Precise matches require that the
HTTP host header matches the host field. Wildcard matches require that the HTTP host header is equal to the suffix of the wildcard rule.

| Host | Host header | Match? |
|-----------|-----------------|---------------------------------------------------|
| *.foo.com | bar.foo.com | Matches based on shared suffix |
| *.foo.com | baz.bar.foo.com | No match, wildcard only covers a single DNS label |
| *.foo.com | foo.com | No match, wildcard only covers a single DNS label |

service/networking/ingress-wildcard-host.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-wildcard-host
spec:
rules:
- host: "foo.bar.com"
http:
paths:
- pathType: Prefix
path: "/bar"
backend:
service:
name: service1
port:
number: 80
- host: "*.foo.com"
http:
paths:
- pathType: Prefix
path: "/foo"
backend:
service:
name: service2
port:
number: 80

Ingress class
Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a
reference to an IngressClass resource that contains additional configuration including the name of the controller that should
implement the class.

service/networking/external-lb.yaml

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: external-lb
spec:
controller: example.com/ingress-controller
parameters:
apiGroup: k8s.example.com
kind: IngressParameters
name: external-lb

The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that
IngressClass.

The specific type of parameters to use depends on the ingress controller that you specify in the .spec.controller field of the
IngressClass.

IngressClass scope
Depending on your ingress controller, you may be able to use parameters that you set cluster-wide, or just for one namespace.


The default scope for IngressClass parameters is cluster-wide.

If you set the .spec.parameters field and don't set .spec.parameters.scope , or if you set .spec.parameters.scope to Cluster ,
then the IngressClass refers to a cluster-scoped resource. The kind (in combination with the apiGroup ) of the parameters refers to a
cluster-scoped API (possibly a custom resource), and the name of the parameters identifies a specific cluster-scoped resource
for that API.

For example:

---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: external-lb-1
spec:
controller: example.com/ingress-controller
parameters:
# The parameters for this IngressClass are specified in a
# ClusterIngressParameter (API group k8s.example.net) named
# "external-config-1". This definition tells Kubernetes to
# look for a cluster-scoped parameter resource.
scope: Cluster
apiGroup: k8s.example.net
kind: ClusterIngressParameter
name: external-config-1
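
A namespace-scoped variant is also possible; a minimal sketch is shown below (the API group, kind, namespace, and names are hypothetical). When .spec.parameters.scope is set to Namespace , the .spec.parameters.namespace field identifies the namespace that holds the parameters resource:

---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb-2
spec:
  controller: example.com/ingress-controller
  parameters:
    # The parameters live in a namespaced resource rather than a
    # cluster-scoped one.
    scope: Namespace
    apiGroup: k8s.example.com
    kind: IngressParameter
    namespace: external-configuration
    name: external-config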

Deprecated annotation
Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a
kubernetes.io/ingress.class annotation on the Ingress. This annotation was never formally defined, but was widely supported by
Ingress controllers.

The newer ingressClassName field on Ingresses is a replacement for that annotation, but is not a direct equivalent. While the
annotation was generally used to reference the name of the Ingress controller that should implement the Ingress, the field is a
reference to an IngressClass resource that contains additional Ingress configuration, including the name of the Ingress controller.
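
For comparison only, an Ingress using the deprecated annotation looked roughly like this (the values are placeholders; prefer spec.ingressClassName in new configurations):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-annotation-example
  annotations:
    # Deprecated: use spec.ingressClassName instead.
    kubernetes.io/ingress.class: "nginx"
spec:
  defaultBackend:
    service:
      name: test
      port:
        number: 80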

Default IngressClass
You can mark a particular IngressClass as default for your cluster. Setting the ingressclass.kubernetes.io/is-default-class
annotation to true on an IngressClass resource will ensure that new Ingresses without an ingressClassName field specified will be
assigned this default IngressClass.

Caution:
If you have more than one IngressClass marked as the default for your cluster, the admission controller prevents creating new
Ingress objects that don't have an ingressClassName specified. You can resolve this by ensuring that at most 1 IngressClass is
marked as default in your cluster.
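
As a hedged example of how you might audit this, you can inspect the annotations on each IngressClass and clear the marker where it does not belong (the class name below is a placeholder; the trailing dash removes the annotation):

kubectl describe ingressclass

kubectl annotate ingressclass old-default-class ingressclass.kubernetes.io/is-default-class-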

There are some ingress controllers that work without the definition of a default IngressClass . For example, the Ingress-NGINX
controller can be configured with the flag --watch-ingress-without-class . It is recommended, though, to specify the default
IngressClass :

service/networking/default-ingressclass.yaml

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
name: nginx-example
annotations:
ingressclass.kubernetes.io/is-default-class: "true"
spec:
controller: k8s.io/ingress-nginx

Types of Ingress
Ingress backed by a single Service
There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an
Ingress by specifying a default backend with no rules.

service/networking/test-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
spec:
defaultBackend:
service:
name: test
port:
number: 80

If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:

kubectl get ingress test-ingress

NAME CLASS HOSTS ADDRESS PORTS AGE


test-ingress external-lb * 203.0.113.123 80 59s

Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.

Note:
Ingress controllers and load balancers may take a minute or two to allocate an IP address. Until that time, you often see the
address listed as <pending>.

Simple fanout
A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested. An
Ingress allows you to keep the number of load balancers down to a minimum. For example, a setup like:

Figure. Ingress Fan Out (diagram): a client reaches an Ingress-managed load balancer (Ingress, 178.91.123.132); requests for /foo go to Service service1:4200 and requests for /bar go to Service service2:8080, each Service backed by Pods in the cluster.

It would require an Ingress such as:

service/networking/simple-fanout-example.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: simple-fanout-example
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
pathType: Prefix
backend:
service:
name: service1
port:
number: 4200
- path: /bar
pathType: Prefix
backend:
service:
name: service2
port:
number: 8080

When you create the Ingress with kubectl apply -f :

kubectl describe ingress simple-fanout-example

Name: simple-fanout-example
Namespace: default
Address: 178.91.123.132
Default backend: default-http-backend:80 (10.8.2.3:8080)
Rules:
Host Path Backends
---- ---- --------
foo.bar.com
/foo service1:4200 (10.8.0.90:4200)
/bar service2:8080 (10.8.0.91:8080)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 22s loadbalancer-controller default/test

The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services
( service1 , service2 ) exist. When it has done so, you can see the address of the load balancer at the Address field.

Note:
Depending on the Ingress controller you are using, you may need to create a default-http-backend Service.

Name based virtual hosting


Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.

Figure. Ingress Name Based Virtual hosting (diagram): a client reaches an Ingress-managed load balancer (Ingress, 178.91.123.132); requests with Host: foo.bar.com go to Service service1:80 and requests with Host: bar.foo.com go to Service service2:80, each Service backed by Pods in the cluster.

The following Ingress tells the backing load balancer to route requests based on the Host header.

service/networking/name-virtual-host-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: name-virtual-host-ingress
spec:
rules:
- host: foo.bar.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: service1
port:
number: 80
- host: bar.foo.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: service2
port:
number: 80

If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress
controller can be matched without a name based virtual host being required.

For example, the following Ingress routes traffic requested for first.bar.com to service1 , second.bar.com to service2 , and any
traffic whose request host header doesn't match first.bar.com and second.bar.com to service3 .

service/networking/name-virtual-host-ingress-no-third-host.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: name-virtual-host-ingress-no-third-host
spec:
rules:
- host: first.bar.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: service1
port:
number: 80
- host: second.bar.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: service2
port:
number: 80
- http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: service3
port:
number: 80

TLS
You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a
single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext). If the TLS
configuration section in an Ingress specifies different hosts, they are multiplexed on the same port according to the hostname
specified through the SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret must contain keys named
tls.crt and tls.key that contain the certificate and private key to use for TLS. For example:

apiVersion: v1
kind: Secret
metadata:
name: testsecret-tls
namespace: default
data:
tls.crt: base64 encoded cert
tls.key: base64 encoded key
type: kubernetes.io/tls

Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS.
You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully
Qualified Domain Name (FQDN) for https-example.foo.com .
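
As a convenience, a Secret of this type can also be created directly from an existing certificate and key pair; the file paths below are placeholders:

kubectl create secret tls testsecret-tls --cert=path/to/tls.crt --key=path/to/tls.key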

Note:

Keep in mind that TLS will not work on the default rule because the certificates would have to be issued for all the possible sub-
domains. Therefore, hosts in the tls section need to explicitly match the host in the rules section.

service/networking/tls-example-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-example-ingress
spec:
tls:
- hosts:
- https-example.foo.com
secretName: testsecret-tls
rules:
- host: https-example.foo.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: service1
port:
number: 80

Note:
There is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on nginx, GCE, or
any other platform specific Ingress controller to understand how TLS works in your environment.

Load balancing
An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load
balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts (e.g. persistent sessions, dynamic
weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.

It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in
Kubernetes such as readiness probes that allow you to achieve the same end result. Please review the controller specific
documentation to see how they handle health checks (for example: nginx, or GCE).

Updating an Ingress
To update an existing Ingress to add a new Host, you can update it by editing the resource:

kubectl describe ingress test

Name: test
Namespace: default
Address: 178.91.123.132
Default backend: default-http-backend:80 (10.8.2.3:8080)
Rules:
Host Path Backends
---- ---- --------
foo.bar.com
/foo service1:80 (10.8.0.90:80)
Annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 35s loadbalancer-controller default/test

kubectl edit ingress test

This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:

spec:
rules:
- host: foo.bar.com
http:
paths:
- backend:
service:
name: service1
port:
number: 80
path: /foo
pathType: Prefix
- host: bar.baz.com
http:
paths:
- backend:
service:
name: service2
port:
number: 80
path: /foo
pathType: Prefix
..

After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load
balancer.

Verify this:

kubectl describe ingress test

Name: test
Namespace: default
Address: 178.91.123.132
Default backend: default-http-backend:80 (10.8.2.3:8080)
Rules:
Host Path Backends
---- ---- --------
foo.bar.com
/foo service1:80 (10.8.0.90:80)
bar.baz.com
/foo service2:80 (10.8.0.91:80)
Annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 45s loadbalancer-controller default/test

You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.

Failing across availability zones


Techniques for spreading traffic across failure domains differ between cloud providers. Please check the documentation of the
relevant Ingress controller for details.

Alternatives
You can expose a Service in multiple ways that don't directly involve the Ingress resource:

Use Service.Type=LoadBalancer
Use Service.Type=NodePort

What's next
Learn about the Ingress API
Learn about Ingress controllers
Set up Ingress on Minikube with the NGINX Controller

5.3 - Ingress Controllers


In order for an Ingress to work in your cluster, there must be an ingress controller running. You need to select
at least one ingress controller and make sure it is set up in your cluster. This page lists common ingress
controllers that you can deploy.

In order for the Ingress resource to work, the cluster must have an ingress controller running.

Unlike other types of controllers which run as part of the kube-controller-manager binary, Ingress controllers are not started
automatically with a cluster. Use this page to choose the ingress controller implementation that best fits your cluster.

Kubernetes as a project supports and maintains AWS, GCE, and nginx ingress controllers.

Additional controllers
Note: This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project
authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the content guide
before submitting a change. More information.

AKS Application Gateway Ingress Controller is an ingress controller that configures the Azure Application Gateway.
Alibaba Cloud MSE Ingress is an ingress controller that configures the Alibaba Cloud Native Gateway, which is also the
commercial version of Higress.
Apache APISIX ingress controller is an Apache APISIX-based ingress controller.
Avi Kubernetes Operator provides L4-L7 load-balancing using VMware NSX Advanced Load Balancer.
BFE Ingress Controller is a BFE-based ingress controller.
Cilium Ingress Controller is an ingress controller powered by Cilium.
The Citrix ingress controller works with Citrix Application Delivery Controller.
Contour is an Envoy based ingress controller.
Emissary-Ingress API Gateway is an Envoy-based ingress controller.
EnRoute is an Envoy based API gateway that can run as an ingress controller.
Easegress IngressController is an Easegress based API gateway that can run as an ingress controller.
F5 BIG-IP Container Ingress Services for Kubernetes lets you use an Ingress to configure F5 BIG-IP virtual servers.
FortiADC Ingress Controller supports the Kubernetes Ingress resources and allows you to manage FortiADC objects from
Kubernetes.
Gloo is an open-source ingress controller based on Envoy, which offers API gateway functionality.
HAProxy Ingress is an ingress controller for HAProxy.
Higress is an Envoy based API gateway that can run as an ingress controller.
The HAProxy Ingress Controller for Kubernetes is also an ingress controller for HAProxy.
Istio Ingress is an Istio based ingress controller.
The Kong Ingress Controller for Kubernetes is an ingress controller driving Kong Gateway.
Kusk Gateway is an OpenAPI-driven ingress controller based on Envoy.
The NGINX Ingress Controller for Kubernetes works with the NGINX webserver (as a proxy).
The ngrok Kubernetes Ingress Controller is an open source controller for adding secure public access to your K8s services using
the ngrok platform.
The OCI Native Ingress Controller is an Ingress controller for Oracle Cloud Infrastructure which allows you to manage the OCI
Load Balancer.
OpenNJet Ingress Controller is an OpenNJet-based ingress controller.
The Pomerium Ingress Controller is based on Pomerium, which offers context-aware access policy.
Skipper HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a
library to build your custom proxy.
The Traefik Kubernetes Ingress provider is an ingress controller for the Traefik proxy.
Tyk Operator extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works
with the Open Source Tyk Gateway & Tyk Cloud control plane.
Voyager is an ingress controller for HAProxy.
Wallarm Ingress Controller is an Ingress Controller that provides WAAP (WAF) and API Security capabilities.

Using multiple Ingress controllers


You may deploy any number of ingress controllers using ingress class within a cluster. Note the .metadata.name of your ingress class
resource. When you create an ingress you would need that name to specify the ingressClassName field on your Ingress object (refer
to IngressSpec v1 reference). ingressClassName is a replacement of the older annotation method.
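
For instance, a minimal sketch of an Ingress that selects a specific controller by class name (the class and Service names here are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pick-a-class-example
spec:
  ingressClassName: external-lb   # must match the .metadata.name of an IngressClass
  defaultBackend:
    service:
      name: test
      port:
        number: 80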

If you do not specify an IngressClass for an Ingress, and your cluster has exactly one IngressClass marked as default, then
Kubernetes applies the cluster's default IngressClass to the Ingress. You mark an IngressClass as default by setting the
ingressclass.kubernetes.io/is-default-class annotation on that IngressClass, with the string value "true" .

Ideally, all ingress controllers should fulfill this specification, but the various ingress controllers operate slightly differently.

Note:
Make sure you review your ingress controller's documentation to understand the caveats of choosing it.

What's next
Learn more about Ingress.
Set up Ingress on Minikube with the NGINX Controller.

5.4 - Gateway API


Gateway API is a family of API kinds that provide dynamic infrastructure provisioning and advanced traffic
routing.

Make network services available by using an extensible, role-oriented, protocol-aware configuration mechanism. Gateway API is an
add-on containing API kinds that provide dynamic infrastructure provisioning and advanced traffic routing.

Design principles
The following principles shaped the design and architecture of Gateway API:

Role-oriented: Gateway API kinds are modeled after organizational roles that are responsible for managing Kubernetes service
networking:
Infrastructure Provider: Manages infrastructure that allows multiple isolated clusters to serve multiple tenants, e.g. a
cloud provider.
Cluster Operator: Manages clusters and is typically concerned with policies, network access, application permissions, etc.
Application Developer: Manages an application running in a cluster and is typically concerned with application-level
configuration and Service composition.
Portable: Gateway API specifications are defined as custom resources and are supported by many implementations.
Expressive: Gateway API kinds support functionality for common traffic routing use cases such as header-based matching,
traffic weighting, and others that were only possible in Ingress by using custom annotations.
Extensible: Gateway allows for custom resources to be linked at various layers of the API. This makes granular customization
possible at the appropriate places within the API structure.

Resource model
Gateway API has three stable API kinds:

GatewayClass: Defines a set of gateways with common configuration and managed by a controller that implements the class.

Gateway: Defines an instance of traffic handling infrastructure, such as cloud load balancer.

HTTPRoute: Defines HTTP-specific rules for mapping traffic from a Gateway listener to a representation of backend network
endpoints. These endpoints are often represented as a Service.

Gateway API is organized into different API kinds that have interdependent relationships to support the role-oriented nature of
organizations. A Gateway object is associated with exactly one GatewayClass; the GatewayClass describes the gateway controller
responsible for managing Gateways of this class. One or more route kinds such as HTTPRoute, are then associated to Gateways. A
Gateway can filter the routes that may be attached to its listeners , forming a bidirectional trust model with routes.

The following figure illustrates the relationships of the three stable Gateway API kinds:

Figure. Gateway API resource relationships (diagram): within a cluster, an HTTPRoute attaches to a Gateway, and the Gateway references a GatewayClass.

GatewayClass
Gateways can be implemented by different controllers, often with different configurations. A Gateway must reference a
GatewayClass that contains the name of the controller that implements the class.

A minimal GatewayClass example:

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: example-class
spec:
controllerName: example.com/gateway-controller

In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name
example.com/gateway-controller . Gateways of this class will be managed by the implementation's controller.

See the GatewayClass reference for a full definition of this API kind.

Gateway
A Gateway describes an instance of traffic handling infrastructure. It defines a network endpoint that can be used for processing
traffic, i.e. filtering, balancing, splitting, etc. for backends such as a Service. For example, a Gateway may represent a cloud load
balancer or an in-cluster proxy server that is configured to accept HTTP traffic.

A minimal Gateway resource example:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: example-gateway
spec:
gatewayClassName: example-class
listeners:
- name: http
protocol: HTTP
port: 80

In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the
addresses field is unspecified, an address or hostname is assigned to the Gateway by the implementation's controller. This address
is used as a network endpoint for processing traffic of backend network endpoints defined in routes.

See the Gateway reference for a full definition of this API kind.

HTTPRoute
The HTTPRoute kind specifies routing behavior of HTTP requests from a Gateway listener to backend network endpoints. For a
Service backend, an implementation may represent the backend network endpoint as a Service IP or the backing Endpoints of the
Service. An HTTPRoute represents configuration that is applied to the underlying Gateway implementation. For example, defining a
new HTTPRoute may result in configuring additional traffic routes in a cloud load balancer or in-cluster proxy server.

A minimal HTTPRoute example:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: example-httproute
spec:
parentRefs:
- name: example-gateway
hostnames:
- "www.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /login
backendRefs:
- name: example-svc
port: 8080

In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path
specified as /login will be routed to Service example-svc on port 8080 .

See the HTTPRoute reference for a full definition of this API kind.

Request flow
Here is a simple example of HTTP traffic being routed to a Service by using a Gateway and an HTTPRoute:

In this example, the request flow for a Gateway implemented as a reverse proxy is:

1. The client starts to prepare an HTTP request for the URL http://www.example.com
2. The client's DNS resolver queries for the destination name and learns a mapping to one or more IP addresses associated with
the Gateway.
3. The client sends a request to the Gateway IP address; the reverse proxy receives the HTTP request and uses the Host: header
to match a configuration that was derived from the Gateway and attached HTTPRoute.
4. Optionally, the reverse proxy can perform request header and/or path matching based on match rules of the HTTPRoute.
5. Optionally, the reverse proxy can modify the request; for example, to add or remove headers, based on filter rules of the
HTTPRoute.
6. Lastly, the reverse proxy forwards the request to one or more backends.
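
For illustration (not part of the upstream example), assuming the Gateway above were assigned the placeholder address 203.0.113.10 and DNS for www.example.com resolved to it, you could exercise this flow with a plain HTTP request such as:

curl --resolve www.example.com:80:203.0.113.10 http://www.example.com/login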

Conformance
Gateway API covers a broad set of features and is widely implemented. This combination requires clear conformance definitions and
tests to ensure that the API provides a consistent experience wherever it is used.

See the conformance documentation to understand details such as release channels, support levels, and running conformance
tests.

Migrating from Ingress


Gateway API is the successor to the Ingress API. However, it does not include the Ingress kind. As a result, a one-time conversion
from your existing Ingress resources to Gateway API resources is necessary.

Refer to the ingress migration guide for details on migrating Ingress resources to Gateway API resources.

What's next
Instead of Gateway API resources being natively implemented by Kubernetes, the specifications are defined as Custom Resources
supported by a wide range of implementations. Install the Gateway API CRDs or follow the installation instructions of your selected
implementation. After installing an implementation, use the Getting Started guide to help you quickly start working with Gateway
API.

Note:
Make sure to review the documentation of your selected implementation to understand any caveats.

Refer to the API specification for additional details of all Gateway API kinds.

5.5 - EndpointSlices
The EndpointSlice API is the mechanism that Kubernetes uses to let your Service scale to handle large
numbers of backends, and allows the cluster to update its list of healthy backends efficiently.

ⓘ FEATURE STATE: Kubernetes v1.21 [stable]

Kubernetes' EndpointSlice API provides a way to track network endpoints within a Kubernetes cluster. EndpointSlices offer a more
scalable and extensible alternative to Endpoints.

EndpointSlice API
In Kubernetes, an EndpointSlice contains references to a set of network endpoints. The control plane automatically creates
EndpointSlices for any Kubernetes Service that has a selector specified. These EndpointSlices include references to all the Pods that
match the Service selector. EndpointSlices group network endpoints together by unique combinations of protocol, port number, and
Service name. The name of an EndpointSlice object must be a valid DNS subdomain name.

As an example, here's a sample EndpointSlice object, that's owned by the example Kubernetes Service.

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
name: example-abc
labels:
kubernetes.io/service-name: example
addressType: IPv4
ports:
- name: http
protocol: TCP
port: 80
endpoints:
- addresses:
- "10.1.2.3"
conditions:
ready: true
hostname: pod-1
nodeName: node-1
zone: us-west2-a

By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this
with the --max-endpoints-per-slice kube-controller-manager flag, up to a maximum of 1000.
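
How that flag is set depends on how your control plane runs kube-controller-manager; as a hedged sketch, in a kubeadm-style static Pod manifest it might appear among the command arguments like this (all other flags elided):

# excerpt from a kube-controller-manager Pod manifest (illustrative only)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --max-endpoints-per-slice=100
    # ... other flags elided ...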

EndpointSlices can act as the source of truth for kube-proxy when it comes to how to route internal traffic.

Address types
EndpointSlices support three address types:

IPv4
IPv6
FQDN (Fully Qualified Domain Name)

Each EndpointSlice object represents a specific IP address type. If you have a Service that is available via IPv4 and IPv6, there will be
at least two EndpointSlice objects (one for IPv4, and one for IPv6).

Conditions
The EndpointSlice API stores conditions about endpoints that may be useful for consumers. The three conditions are ready ,
serving , and terminating .

Ready
ready is a condition that maps to a Pod's Ready condition. A running Pod with the Ready condition set to True should have this
EndpointSlice condition also set to true . For compatibility reasons, ready is NEVER true when a Pod is terminating. Consumers
should refer to the serving condition to inspect the readiness of terminating Pods. The only exception to this rule is for Services
with spec.publishNotReadyAddresses set to true . Endpoints for these Services will always have the ready condition set to true .

Serving

ⓘ FEATURE STATE: Kubernetes v1.26 [stable]

The serving condition is almost identical to the ready condition. The difference is that consumers of the EndpointSlice API should
check the serving condition if they care about pod readiness while the pod is also terminating.

Note:
Although serving is almost identical to ready, it was added to prevent breaking the existing meaning of ready. It may be
unexpected for existing clients if ready could be true for terminating endpoints, since historically terminating endpoints were
never included in the Endpoints or EndpointSlice API to begin with. For this reason, ready is always false for terminating
endpoints, and a new condition serving was added in v1.20 so that clients can track readiness for terminating pods independent
of the existing semantics for ready.

Terminating

ⓘ FEATURE STATE: Kubernetes v1.22 [beta]

Terminating is a condition that indicates whether an endpoint is terminating. For pods, this is any pod that has a deletion timestamp
set.
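
As a hedged illustration of how these three conditions relate, an endpoint for a Pod that is being deleted but still passes its readiness probe might appear in an EndpointSlice like this (the address and node name are placeholders):

endpoints:
- addresses:
  - "10.1.2.4"
  conditions:
    ready: false        # always false while the Pod is terminating
    serving: true       # the Pod still reports itself ready
    terminating: true   # the Pod has a deletion timestamp
  nodeName: node-2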

Topology information
Each endpoint within an EndpointSlice can contain relevant topology information. The topology information includes the location of
the endpoint and information about the corresponding Node and zone. These are available in the following per endpoint fields on
EndpointSlices:

nodeName - The name of the Node this endpoint is on.


zone - The zone this endpoint is in.

Note:
In the v1 API, the per endpoint topology was effectively removed in favor of the dedicated fields nodeName and zone .

Setting arbitrary topology fields on the endpoint field of an EndpointSlice resource has been deprecated and is not supported
in the v1 API. Instead, the v1 API supports setting individual nodeName and zone fields. These fields are automatically translated
between API versions. For example, the value of the "topology.kubernetes.io/zone" key in the topology field in the v1beta1 API
is accessible as the zone field in the v1 API.

Management
Most often, the control plane (specifically, the endpoint slice controller) creates and manages EndpointSlice objects. There are a
variety of other use cases for EndpointSlices, such as service mesh implementations, that could result in other entities or controllers
managing additional sets of EndpointSlices.

To ensure that multiple entities can manage EndpointSlices without interfering with each other, Kubernetes defines the label
endpointslice.kubernetes.io/managed-by , which indicates the entity managing an EndpointSlice. The endpoint slice controller sets
endpointslice-controller.k8s.io as the value for this label on all EndpointSlices it manages. Other entities managing EndpointSlices
should also set a unique value for this label.

Ownership
In most use cases, EndpointSlices are owned by the Service that the endpoint slice object tracks endpoints for. This ownership is
indicated by an owner reference on each EndpointSlice as well as a kubernetes.io/service-name label that enables simple lookups of
all EndpointSlices belonging to a Service.
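
For example, to list the EndpointSlices that belong to the example Service shown earlier, you can filter on that label:

kubectl get endpointslices -l kubernetes.io/service-name=example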

EndpointSlice mirroring
In some cases, applications create custom Endpoints resources. To ensure that these applications do not need to concurrently write
to both Endpoints and EndpointSlice resources, the cluster's control plane mirrors most Endpoints resources to corresponding
EndpointSlices.

The control plane mirrors Endpoints resources unless:

the Endpoints resource has an endpointslice.kubernetes.io/skip-mirror label set to true .


the Endpoints resource has a control-plane.alpha.kubernetes.io/leader annotation.
the corresponding Service resource does not exist.
the corresponding Service resource has a non-nil selector.

Individual Endpoints resources may translate into multiple EndpointSlices. This will occur if an Endpoints resource has multiple
subsets or includes endpoints with multiple IP families (IPv4 and IPv6). A maximum of 1000 addresses per subset will be mirrored to
EndpointSlices.
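
As a hedged sketch, a custom Endpoints object that opts out of mirroring via the endpointslice.kubernetes.io/skip-mirror label might look like this (the name and address are placeholders):

apiVersion: v1
kind: Endpoints
metadata:
  name: example-custom
  labels:
    # tells the control plane not to mirror this object into EndpointSlices
    endpointslice.kubernetes.io/skip-mirror: "true"
subsets:
- addresses:
  - ip: 192.0.2.10
  ports:
  - port: 8080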

Distribution of EndpointSlices
Each EndpointSlice has a set of ports that applies to all endpoints within the resource. When named ports are used for a Service,
Pods may end up with different target port numbers for the same named port, requiring different EndpointSlices. This is similar to
the logic behind how subsets are grouped with Endpoints.

The control plane tries to fill EndpointSlices as full as possible, but does not actively rebalance them. The logic is fairly
straightforward:

1. Iterate through existing EndpointSlices, remove endpoints that are no longer desired and update matching endpoints that have
changed.
2. Iterate through EndpointSlices that have been modified in the first step and fill them up with any new endpoints needed.
3. If there's still new endpoints left to add, try to fit them into a previously unchanged slice and/or create new ones.

Importantly, the third step prioritizes limiting EndpointSlice updates over a perfectly full distribution of EndpointSlices. As an
example, if there are 10 new endpoints to add and 2 EndpointSlices with room for 5 more endpoints each, this approach will create
a new EndpointSlice instead of filling up the 2 existing EndpointSlices. In other words, a single EndpointSlice creation is preferable to
multiple EndpointSlice updates.

With kube-proxy running on each Node and watching EndpointSlices, every change to an EndpointSlice becomes relatively expensive
since it will be transmitted to every Node in the cluster. This approach is intended to limit the number of changes that need to be
sent to every Node, even if it may result in multiple EndpointSlices that are not full.

In practice, this less than ideal distribution should be rare. Most changes processed by the EndpointSlice controller will be small
enough to fit in an existing EndpointSlice, and if not, a new EndpointSlice is likely going to be necessary soon anyway. Rolling updates
of Deployments also provide a natural repacking of EndpointSlices with all Pods and their corresponding endpoints getting replaced.

Duplicate endpoints
Due to the nature of EndpointSlice changes, endpoints may be represented in more than one EndpointSlice at the same time. This
naturally occurs as changes to different EndpointSlice objects can arrive at the Kubernetes client watch / cache at different times.

Note:
Clients of the EndpointSlice API must iterate through all the existing EndpointSlices associated to a Service and build a complete
list of unique network endpoints. It is important to mention that endpoints may be duplicated in different EndpointSlices.
You can find a reference implementation for how to perform this endpoint aggregation and deduplication as part of the
EndpointSliceCache code within kube-proxy .

Comparison with Endpoints


The original Endpoints API provided a simple and straightforward way of tracking network endpoints in Kubernetes. As Kubernetes
clusters and Services grew to handle more traffic and to send more traffic to more backend Pods, the limitations of that original API
became more visible. Most notably, those included challenges with scaling to larger numbers of network endpoints.

Since all network endpoints for a Service were stored in a single Endpoints object, those Endpoints objects could get quite large. For
Services that stayed stable (the same set of endpoints over a long period of time) the impact was less noticeable; even then, some
use cases of Kubernetes weren't well served.

When a Service had a lot of backend endpoints and the workload was either scaling frequently, or rolling out new changes
frequently, each update to the single Endpoints object for that Service meant a lot of traffic between Kubernetes cluster components
(within the control plane, and also between nodes and the API server). This extra traffic also had a cost in terms of CPU use.

With EndpointSlices, adding or removing a single Pod triggers the same number of updates to clients that are watching for changes,
but the size of those update message is much smaller at large scale.

EndpointSlices also enabled innovation around new features such as dual-stack networking and topology-aware routing.

What's next
Follow the Connecting Applications with Services tutorial
Read the API reference for the EndpointSlice API
Read the API reference for the Endpoints API

5.6 - Network Policies


If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), NetworkPolicies allow you to
specify rules for traffic flow within your cluster, and also between Pods and the outside world. Your cluster
must use a network plugin that supports NetworkPolicy enforcement.

If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols, then you might consider using
Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which
allow you to specify how a pod is allowed to communicate with various network "entities" (we use the word "entity" here to avoid
overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the
network. NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to other connections.

The entities that a Pod can communicate with are identified through a combination of the following three identifiers:

1. Other pods that are allowed (exception: a pod cannot block access to itself)
2. Namespaces that are allowed
3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the
Pod or the node)

When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s)
that match the selector.

Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).

Prerequisites
Network policies are implemented by the network plugin. To use network policies, you must be using a networking solution which
supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.

The two sorts of pod isolation


There are two sorts of isolation for a pod: isolation for egress, and isolation for ingress. They concern what connections may be
established. "Isolation" here is not absolute, rather it means "some restrictions apply". The alternative, "non-isolated for $direction",
means that no restrictions apply in the stated direction. The two sorts of isolation (or not) are declared independently, and are both
relevant for a connection from one pod to another.

By default, a pod is non-isolated for egress; all outbound connections are allowed. A pod is isolated for egress if there is any
NetworkPolicy that both selects the pod and has "Egress" in its policyTypes ; we say that such a policy applies to the pod for egress.
When a pod is isolated for egress, the only allowed connections from the pod are those allowed by the egress list of some
NetworkPolicy that applies to the pod for egress. Reply traffic for those allowed connections will also be implicitly allowed. The
effects of those egress lists combine additively.

By default, a pod is non-isolated for ingress; all inbound connections are allowed. A pod is isolated for ingress if there is any
NetworkPolicy that both selects the pod and has "Ingress" in its policyTypes ; we say that such a policy applies to the pod for ingress.
When a pod is isolated for ingress, the only allowed connections into the pod are those from the pod's node and those allowed by
the ingress list of some NetworkPolicy that applies to the pod for ingress. Reply traffic for those allowed connections will also be
implicitly allowed. The effects of those ingress lists combine additively.

Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections
allowed in that direction from that pod is the union of what the applicable policies allow. Thus, order of evaluation does not affect
the policy result.

For a connection from a source pod to a destination pod to be allowed, both the egress policy on the source pod and the ingress
policy on the destination pod need to allow the connection. If either side does not allow the connection, it will not happen.
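
As a hedged illustration of this both-ends requirement (the app=client and app=server labels are hypothetical, and both sets of Pods are assumed to run in the same namespace), the following pair of policies would allow client Pods to open connections to server Pods on TCP port 8080: the first permits the egress from the clients, and the second permits the ingress to the servers.

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-egress
spec:
  podSelector:
    matchLabels:
      app: client
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: server
    ports:
    - protocol: TCP
      port: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-server-ingress
spec:
  podSelector:
    matchLabels:
      app: server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client
    ports:
    - protocol: TCP
      port: 8080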

The NetworkPolicy resource


See the NetworkPolicy reference for a full definition of the resource.

An example NetworkPolicy might look like this:


service/networking/networkpolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-network-policy
namespace: default
spec:
podSelector:
matchLabels:
role: db
policyTypes:
- Ingress
- Egress
ingress:
- from:
- ipBlock:
cidr: 172.17.0.0/16
except:
- 172.17.1.0/24
- namespaceSelector:
matchLabels:
project: myproject
- podSelector:
matchLabels:
role: frontend
ports:
- protocol: TCP
port: 6379
egress:
- to:
- ipBlock:
cidr: 10.0.0.0/24
ports:
- protocol: TCP
port: 5978

Note:
POSTing this to the API server for your cluster will have no effect unless your chosen networking solution supports network
policy.

Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion , kind , and metadata fields. For general
information about working with config files, see Configure a Pod to Use a ConfigMap, and Object Management.

spec: NetworkPolicy spec has all the information needed to define a particular network policy in the given namespace.

podSelector: Each NetworkPolicy includes a podSelector which selects the grouping of pods to which the policy applies. The
example policy selects pods with the label "role=db". An empty podSelector selects all pods in the namespace.

policyTypes: Each NetworkPolicy includes a policyTypes list which may include either Ingress , Egress , or both. The policyTypes
field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If
no policyTypes are specified on a NetworkPolicy then by default Ingress will always be set and Egress will be set if the
NetworkPolicy has any egress rules.

ingress: Each NetworkPolicy may include a list of allowed ingress rules. Each rule allows traffic which matches both the from and
ports sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first
specified via an ipBlock , the second via a namespaceSelector and the third via a podSelector .

egress: Each NetworkPolicy may include a list of allowed egress rules. Each rule allows traffic which matches both the to and
ports sections. The example policy contains a single rule, which matches traffic on a single port to any destination in 10.0.0.0/24 .

So, the example NetworkPolicy:

1. isolates role=db pods in the default namespace for both ingress and egress traffic (if they weren't already isolated)

2. (Ingress rules) allows connections to all pods in the default namespace with the label role=db on TCP port 6379 from:

any pod in the default namespace with the label role=frontend


any pod in a namespace with the label project=myproject
IP addresses in the ranges 172.17.0.0 – 172.17.0.255 and 172.17.2.0 – 172.17.255.255 (ie, all of 172.17.0.0/16 except
172.17.1.0/24 )

3. (Egress rules) allows connections from any pod in the default namespace with the label role=db to CIDR 10.0.0.0/24 on TCP
port 5978

See the Declare Network Policy walkthrough for further examples.

Behavior of to and from selectors


There are four kinds of selectors that can be specified in an ingress from section or egress to section:

podSelector: This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources
or egress destinations.

namespaceSelector: This selects particular namespaces for which all Pods should be allowed as ingress sources or egress
destinations.

namespaceSelector and podSelector: A single to / from entry that specifies both namespaceSelector and podSelector selects
particular Pods within particular namespaces. Be careful to use correct YAML syntax. For example:

...
ingress:
- from:
- namespaceSelector:
matchLabels:
user: alice
podSelector:
matchLabels:
role: client
...

This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label
user=alice . But the following policy is different:

...
ingress:
- from:
- namespaceSelector:
matchLabels:
user: alice
- podSelector:
matchLabels:
role: client
...

It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client , or
from any Pod in any namespace with the label user=alice .

When in doubt, use kubectl describe to see how Kubernetes has interpreted the policy.

ipBlock: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external
IPs, since Pod IPs are ephemeral and unpredictable.

Cluster ingress and egress mechanisms often require rewriting the source or destination IP of packets. In cases where this happens,
it is not defined whether this happens before or after NetworkPolicy processing, and the behavior may be different for different
combinations of network plugin, cloud provider, Service implementation, etc.

In the case of ingress, this means that in some cases you may be able to filter incoming packets based on the actual original source
IP, while in other cases, the "source IP" that the NetworkPolicy acts on may be the IP of a LoadBalancer or of the Pod's node, etc.

For egress, this means that connections from pods to Service IPs that get rewritten to cluster-external IPs may or may not be
subject to ipBlock -based policies.

Default policies
By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The
following examples let you change the default behavior in that namespace.

Default deny all ingress traffic


You can create a "default" ingress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not
allow any ingress traffic to those pods.

service/networking/network-policy-default-deny-ingress.yaml

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
spec:
podSelector: {}
policyTypes:
- Ingress

This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not
affect isolation for egress from any pod.

Allow all ingress traffic


If you want to allow all incoming connections to all pods in a namespace, you can create a policy that explicitly allows that.

service/networking/network-policy-allow-all-ingress.yaml

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-ingress
spec:
podSelector: {}
ingress:
- {}
policyTypes:
- Ingress

With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy
has no effect on isolation for egress from any pod.

Default deny all egress traffic


You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not
allow any egress traffic from those pods.

service/networking/network-policy-default-deny-egress.yaml

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-egress
spec:
podSelector: {}
policyTypes:
- Egress

This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not
change the ingress isolation behavior of any pod.

Allow all egress traffic


If you want to allow all connections from all pods in a namespace, you can create a policy that explicitly allows all outgoing
connections from pods in that namespace.

service/networking/network-policy-allow-all-egress.yaml

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress

With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy
has no effect on isolation for ingress to any pod.

Default deny all ingress and all egress traffic


You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by creating the following
NetworkPolicy in that namespace.

service/networking/network-policy-default-deny-all.yaml

https://kubernetes.io/docs/concepts/_print/ 249/609
7/10/24, 9:28 AM Concepts | Kubernetes

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.

Network traffic filtering


NetworkPolicy is defined for layer 4 connections (TCP, UDP, and optionally SCTP). For all the other protocols, the behaviour may vary
across network plugins.

Note:
You must be using a CNI plugin that supports SCTP protocol NetworkPolicies.

When a deny all network policy is defined, it is only guaranteed to deny TCP, UDP and SCTP connections. For other protocols, such
as ARP or ICMP, the behaviour is undefined. The same applies to allow rules: when a specific pod is allowed as ingress source or
egress destination, it is undefined what happens with (for example) ICMP packets. Protocols such as ICMP may be allowed by some
network plugins and denied by others.

Targeting a range of ports


ⓘ FEATURE STATE: Kubernetes v1.25 [stable]

When writing a NetworkPolicy, you can target a range of ports instead of a single port.

You can do this by using the endPort field, as in the following example:

service/networking/networkpolicy-multiport-egress.yaml

https://kubernetes.io/docs/concepts/_print/ 250/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 32000
      endPort: 32768

The above rule allows any Pod with the label role=db in the namespace default to communicate with any IP within the range
10.0.0.0/24 over TCP, provided that the target port is between 32000 and 32768.

The following restrictions apply when using this field:

The endPort field must be equal to or greater than the port field.
endPort can only be defined if port is also defined.
Both ports must be numeric.

Note:
Your cluster must be using a CNI plugin that supports the endPort field in NetworkPolicy specifications. If your network plugin
does not support the endPort field and you specify a NetworkPolicy with that, the policy will be applied only for the single port
field.

Targeting multiple namespaces by label


In this scenario, your Egress NetworkPolicy targets more than one namespace using their label names. For this to work, you need to
label the target namespaces. For example:

kubectl label namespace frontend namespace=frontend


kubectl label namespace backend namespace=backend

Add the labels under namespaceSelector in your NetworkPolicy document. For example:

https://kubernetes.io/docs/concepts/_print/ 251/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-namespaces
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchExpressions:
        - key: namespace
          operator: In
          values: ["frontend", "backend"]

Note:
It is not possible to directly specify the name of the namespaces in a NetworkPolicy. You must use a namespaceSelector with
matchLabels or matchExpressions to select the namespaces based on their labels.

Targeting a Namespace by its name


The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces; the value of the label is the
namespace name.

While NetworkPolicy cannot target a namespace by its name with some object field, you can use the standardized label to target a
specific namespace.
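
For example, the following sketch allows ingress to Pods labelled app: myapp only from Pods in a namespace named prod, by matching the standardized label. The policy name, Pod label, and namespace name are illustrative, not taken from this page.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-prod-namespace   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: myapp                    # hypothetical Pod label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # the control plane sets this label automatically; its value is the namespace name
          kubernetes.io/metadata.name: prod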

Pod lifecycle
Note:
The following applies to clusters with a conformant networking plugin and a conformant implementation of NetworkPolicy.

When a new NetworkPolicy object is created, it may take some time for a network plugin to handle the new object. If a pod that is
affected by a NetworkPolicy is created before the network plugin has completed NetworkPolicy handling, that pod may be started
unprotected, and isolation rules will be applied when the NetworkPolicy handling is completed.

Once the NetworkPolicy is handled by a network plugin,

1. All newly created pods affected by a given NetworkPolicy will be isolated before they are started. Implementations of
NetworkPolicy must ensure that filtering is effective throughout the Pod lifecycle, even from the very first instant that any
container in that Pod is started. Because they are applied at Pod level, NetworkPolicies apply equally to init containers, sidecar
containers, and regular containers.

2. Allow rules will be applied eventually after the isolation rules (or may be applied at the same time). In the worst case, a newly
created pod may have no network connectivity at all when it is first started, if isolation rules were already applied, but no allow
rules were applied yet.

Every created NetworkPolicy will be handled by a network plugin eventually, but there is no way to tell from the Kubernetes API
when exactly that happens.

Therefore, pods must be resilient against being started up with different network connectivity than expected. If you need to make
sure the pod can reach certain destinations before being started, you can use an init container to wait for those destinations to be
reachable before kubelet starts the app containers.
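
As an illustration, here is a minimal sketch of such an init container, assuming the destination is a hypothetical api Service in the same namespace that answers HTTP on port 80; the image, names, and check command are illustrative and not part of the NetworkPolicy API.

apiVersion: v1
kind: Pod
metadata:
  name: wait-for-destination        # hypothetical name
spec:
  initContainers:
  - name: wait-for-api
    image: busybox:1.28
    # poll the (hypothetical) api Service until it is reachable; the app
    # container below only starts after this loop succeeds
    command: ['sh', '-c', 'until wget -q --spider http://api; do echo waiting for api; sleep 2; done']
  containers:
  - name: app
    image: registry.example/app:1.0  # hypothetical application image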

https://kubernetes.io/docs/concepts/_print/ 252/609
7/10/24, 9:28 AM Concepts | Kubernetes

Every NetworkPolicy will be applied to all selected pods eventually. Because the network plugin may implement NetworkPolicy in a
distributed manner, it is possible that pods may see a slightly inconsistent view of network policies when the pod is first created, or
when pods or policies change. For example, a newly-created pod that is supposed to be able to reach both Pod A on Node 1 and Pod
B on Node 2 may find that it can reach Pod A immediately, but cannot reach Pod B until a few seconds later.

NetworkPolicy and hostNetwork pods


NetworkPolicy behaviour for hostNetwork pods is undefined, but it should be limited to 2 possibilities:

The network plugin can distinguish hostNetwork pod traffic from all other traffic (including being able to distinguish traffic from
different hostNetwork pods on the same node), and will apply NetworkPolicy to hostNetwork pods just like it does to pod-
network pods.
The network plugin cannot properly distinguish hostNetwork pod traffic, and so it ignores hostNetwork pods when matching
podSelector and namespaceSelector . Traffic to/from hostNetwork pods is treated the same as all other traffic to/from the node
IP. (This is the most common implementation.)

This applies when

1. a hostNetwork pod is selected by spec.podSelector .

...
spec:
  podSelector:
    matchLabels:
      role: client
...

2. a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.

...
ingress:
- from:
  - podSelector:
      matchLabels:
        role: client
...

At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated
as node connections. For example, you can allow traffic from a hostNetwork Pod using an ipBlock rule.
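
For example, here is a sketch of an ingress rule that admits traffic from the node addresses (and therefore from hostNetwork Pods on those nodes); the policy name, Pod label, and the CIDR 192.168.1.0/24 are purely illustrative.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-node-ips         # hypothetical name
spec:
  podSelector:
    matchLabels:
      role: client
  policyTypes:
  - Ingress
  ingress:
  - from:
    # hostNetwork Pods share their node's IP, so an ipBlock covering the
    # node CIDR (illustrative value) also admits traffic from those Pods
    - ipBlock:
        cidr: 192.168.1.0/24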

What you can't do with network policies (at least, not yet)
As of Kubernetes 1.30, the following functionality does not exist in the NetworkPolicy API, but you might be able to implement
workarounds using Operating System components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies
(Ingress controllers, Service Mesh implementations) or admission controllers. If you are new to network security in Kubernetes,
it's worth noting that the following user stories cannot (yet) be implemented using the NetworkPolicy API.

Forcing internal cluster traffic to go through a common gateway (this might be best served with a service mesh or other proxy).
Anything TLS related (use a service mesh or ingress controller for this).
Node specific policies (you can use CIDR notation for these, but you cannot target nodes by their Kubernetes identities
specifically).
Targeting of services by name (you can, however, target pods or namespaces by their labels, which is often a viable
workaround).
Creation or management of "Policy requests" that are fulfilled by a third party.
Default policies which are applied to all namespaces or pods (there are some third party Kubernetes distributions and projects
which can do this).
Advanced policy querying and reachability tooling.
https://kubernetes.io/docs/concepts/_print/ 253/609
7/10/24, 9:28 AM Concepts | Kubernetes

The ability to log network security events (for example connections that are blocked or accepted).
The ability to explicitly deny policies (currently the model for NetworkPolicies is deny by default, with only the ability to add
allow rules).
The ability to prevent loopback or incoming host traffic (Pods cannot currently block localhost access, nor do they have the
ability to block access from their resident node).

NetworkPolicy's impact on existing connections


When the set of NetworkPolicies that applies to an existing connection changes - either because the NetworkPolicies themselves
change, or because the relevant labels of the namespaces/pods selected by the policy (both subject and peers) change in the middle
of an existing connection - it is implementation defined whether the change takes effect for that existing connection. For example,
if a policy is created that denies a previously allowed connection, the underlying network plugin implementation is responsible for
deciding whether that new policy closes the existing connection or not. It is recommended not to modify policies, pods, or
namespaces in ways that might affect existing connections.

What's next
See the Declare Network Policy walkthrough for further examples.
See more recipes for common scenarios enabled by the NetworkPolicy resource.

https://kubernetes.io/docs/concepts/_print/ 254/609
7/10/24, 9:28 AM Concepts | Kubernetes

5.7 - DNS for Services and Pods


Your workload can discover Services within your cluster using DNS; this page explains how that works.

Kubernetes creates DNS records for Services and Pods. You can contact Services with consistent DNS names instead of IP addresses.

Kubernetes publishes information about Pods and Services which is used to program DNS. Kubelet configures Pods' DNS so that
running containers can look up Services by name rather than IP.

Services defined in the cluster are assigned DNS names. By default, a client Pod's DNS search list includes the Pod's own namespace
and the cluster's default domain.

Namespaces of Services
A DNS query may return different results based on the namespace of the Pod making it. DNS queries that don't specify a namespace
are limited to the Pod's namespace. Access Services in other namespaces by specifying it in the DNS query.

For example, consider a Pod in a test namespace. A data Service is in the prod namespace.

A query for data returns no results, because it uses the Pod's test namespace.

A query for data.prod returns the intended result, because it specifies the namespace.

DNS queries may be expanded using the Pod's /etc/resolv.conf . Kubelet configures this file for each Pod. For example, a query for
just data may be expanded to data.test.svc.cluster.local . The values of the search option are used to expand queries. To learn
more about DNS queries, see the resolv.conf manual page.

nameserver 10.32.0.10
search <namespace>.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local .

DNS Records
What objects get DNS records?

1. Services
2. Pods

The following sections detail the supported DNS record types and layout. Any other layout, names, or queries that happen to work
are considered implementation details and are subject to change without warning. For more up-to-date
specification, see Kubernetes DNS-Based Service Discovery.

Services
A/AAAA records
"Normal" (not headless) Services are assigned DNS A and/or AAAA records, depending on the IP family or families of the Service, with
a name of the form my-svc.my-namespace.svc.cluster-domain.example . This resolves to the cluster IP of the Service.

Headless Services (without a cluster IP) are also assigned DNS A and/or AAAA records, with a name of the form my-svc.my-
namespace.svc.cluster-domain.example . Unlike normal Services, this resolves to the set of IPs of all of the Pods selected by the Service.
Clients are expected to consume the set or else use standard round-robin selection from the set.

SRV records
SRV Records are created for named ports that are part of normal or headless services. For each named port, the SRV record has the
form _port-name._port-protocol.my-svc.my-namespace.svc.cluster-domain.example . For a regular Service, this resolves to the port
number and the domain name: my-svc.my-namespace.svc.cluster-domain.example . For a headless Service, this resolves to multiple
answers, one for each Pod that is backing the Service, and contains the port number and the domain name of the Pod of the form
hostname.my-svc.my-namespace.svc.cluster-domain.example .
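
As an illustration, the following headless Service sketch uses a named port; the comments show the record names that follow the layout described above. The Service name, namespace, labels, and port are hypothetical, and cluster-domain.example stands in for your cluster domain.

apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: my-namespace
spec:
  clusterIP: None          # headless: A/AAAA records resolve to the Pod IPs
  selector:
    app: my-app            # hypothetical Pod label
  ports:
  - name: http             # named port, so an SRV record is created
    protocol: TCP
    port: 8080
# Expected record names, following the layout described above:
#   A/AAAA: my-svc.my-namespace.svc.cluster-domain.example
#   SRV:    _http._tcp.my-svc.my-namespace.svc.cluster-domain.example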

https://kubernetes.io/docs/concepts/_print/ 255/609
7/10/24, 9:28 AM Concepts | Kubernetes

Pods
A/AAAA records
Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:

pod-ipv4-address.my-namespace.pod.cluster-domain.example .

For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is
cluster.local , then the Pod has a DNS name:

172-17-0-3.default.pod.cluster.local .

Any Pods exposed by a Service have the following DNS resolution available:

pod-ipv4-address.service-name.my-namespace.svc.cluster-domain.example .

Pod's hostname and subdomain fields


Currently when a Pod is created, its hostname (as observed from within the Pod) is the Pod's metadata.name value.

The Pod spec has an optional hostname field, which can be used to specify a different hostname. When specified, it takes precedence
over the Pod's name to be the hostname of the Pod (again, as observed from within the Pod). For example, given a Pod with
spec.hostname set to "my-host" , the Pod will have its hostname set to "my-host" .

The Pod spec also has an optional subdomain field which can be used to indicate that the pod is part of sub-group of the namespace.
For example, a Pod with spec.hostname set to "foo" , and spec.subdomain set to "bar" , in namespace "my-namespace" , will have its
hostname set to "foo" and its fully qualified domain name (FQDN) set to "foo.bar.my-namespace.svc.cluster.local" (once more, as
observed from within the Pod).

If there exists a headless Service in the same namespace as the Pod, with the same name as the subdomain, the cluster's DNS
Server also returns A and/or AAAA records for the Pod's fully qualified hostname.

Example:

https://kubernetes.io/docs/concepts/_print/ 256/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: v1
kind: Service
metadata:
  name: busybox-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
  - name: foo # name is not required for single-port Services
    port: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: busybox-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: busybox-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox

Given the above Service "busybox-subdomain" and the Pods which set spec.subdomain to "busybox-subdomain" , the first Pod will see its
own FQDN as "busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example" . DNS serves A and/or AAAA records at that
name, pointing to the Pod's IP. Both Pods " busybox1 " and " busybox2 " will have their own address records.

An EndpointSlice can specify the DNS hostname for any endpoint addresses, along with its IP.

Note:
Because A and AAAA records are not created for Pod names, hostname is required for the Pod's A or AAAA record to be created. A
Pod with no hostname but with subdomain will only create the A or AAAA record for the headless Service (busybox-subdomain.my-
namespace.svc.cluster-domain.example), pointing to the Pods' IP addresses. Also, the Pod needs to be ready in order to have a
record unless publishNotReadyAddresses=True is set on the Service.

Pod's setHostnameAsFQDN field

ⓘ FEATURE STATE: Kubernetes v1.22 [stable]

https://kubernetes.io/docs/concepts/_print/ 257/609
7/10/24, 9:28 AM Concepts | Kubernetes

When a Pod is configured to have a fully qualified domain name (FQDN), its hostname is the short hostname. For example, if you have
a Pod with the fully qualified domain name busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example , then by default the
hostname command inside that Pod returns busybox-1 and the hostname --fqdn command returns the FQDN.

When you set setHostnameAsFQDN: true in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's
namespace. In this case, both hostname and hostname --fqdn return the Pod's FQDN.

Note:
In Linux, the hostname field of the kernel (the nodename field of struct utsname ) is limited to 64 characters.

If a Pod enables this feature and its FQDN is longer than 64 character, it will fail to start. The Pod will remain in Pending status
( ContainerCreating as seen by kubectl ) generating error events, such as Failed to construct FQDN from Pod hostname and
cluster domain, FQDN long-FQDN is too long (64 characters is the max, 70 characters requested). One way of improving user
experience for this scenario is to create an admission webhook controller to control FQDN size when users create top level
objects, for example, Deployment.
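
A minimal sketch of a Pod that opts in to this behavior is shown below; the Pod name, hostname, subdomain, and image are illustrative, and a matching headless Service named busybox-subdomain is assumed to exist as in the earlier example.

apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo                # hypothetical name
spec:
  hostname: busybox-3            # illustrative short hostname
  subdomain: busybox-subdomain   # assumes the headless Service shown earlier
  setHostnameAsFQDN: true        # hostname inside the Pod becomes the FQDN
  containers:
  - name: main
    image: busybox:1.28
    command: ["sleep", "3600"]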

Pod's DNS Policy


DNS policies can be set on a per-Pod basis. Currently Kubernetes supports the following Pod-specific DNS policies. These policies are
specified in the dnsPolicy field of a Pod Spec.

" Default ": The Pod inherits the name resolution configuration from the node that the Pods run on. See related discussion for
more details.
" ClusterFirst ": Any DNS query that does not match the configured cluster domain suffix, such as " www.kubernetes.io ", is
forwarded to an upstream nameserver by the DNS server. Cluster administrators may have extra stub-domain and upstream
DNS servers configured. See related discussion for details on how DNS queries are handled in those cases.
" ClusterFirstWithHostNet ": For Pods running with hostNetwork, you should explicitly set its DNS policy to
" ClusterFirstWithHostNet ". Otherwise, Pods running with hostNetwork and "ClusterFirst" will fallback to the behavior of the
"Default" policy.
Note: This is not supported on Windows. See below for details
" None ": It allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS settings are supposed to be provided
using the dnsConfig field in the Pod Spec. See Pod's DNS config subsection below.

Note:
"Default" is not the default DNS policy. If dnsPolicy is not explicitly specified, then "ClusterFirst" is used.

The example below shows a Pod with its DNS policy set to " ClusterFirstWithHostNet " because it has hostNetwork set to true .

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet

https://kubernetes.io/docs/concepts/_print/ 258/609
7/10/24, 9:28 AM Concepts | Kubernetes

Pod's DNS Config

ⓘ FEATURE STATE: Kubernetes v1.14 [stable]

Pod's DNS Config allows users more control over the DNS settings for a Pod.

The dnsConfig field is optional and it can work with any dnsPolicy settings. However, when a Pod's dnsPolicy is set to " None ", the
dnsConfig field has to be specified.

Below are the properties a user can specify in the dnsConfig field:

nameservers: a list of IP addresses that will be used as DNS servers for the Pod. There can be at most 3 IP addresses specified.
When the Pod's dnsPolicy is set to " None ", the list must contain at least one IP address, otherwise this property is optional.
The servers listed will be combined with the base nameservers generated from the specified DNS policy, with duplicate addresses
removed.
searches : a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided
list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are
removed. Kubernetes allows up to 32 search domains.
options : an optional list of objects where each object may have a name property (required) and a value property (optional).
The contents in this property will be merged to the options generated from the specified DNS policy. Duplicate entries are
removed.

The following is an example Pod with custom DNS settings:

service/networking/custom-dns.yaml

apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: dns-example
spec:
  containers:
  - name: test
    image: nginx
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 192.0.2.1 # this is an example
    searches:
    - ns1.svc.cluster-domain.example
    - my.dns.search.suffix
    options:
    - name: ndots
      value: "2"
    - name: edns0

When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:

nameserver 192.0.2.1
search ns1.svc.cluster-domain.example my.dns.search.suffix
options ndots:2 edns0

For IPv6 setup, search path and name server should be set up like this:

kubectl exec -it dns-example -- cat /etc/resolv.conf

https://kubernetes.io/docs/concepts/_print/ 259/609
7/10/24, 9:28 AM Concepts | Kubernetes

The output is similar to this:

nameserver 2001:db8:30::a
search default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example
options ndots:5

DNS search domain list limits


ⓘ FEATURE STATE: Kubernetes 1.28 [stable]

Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search
domains exceeds 2048. This limit applies to the node's resolver configuration file, the Pod's DNS Config, and the merged DNS Config
respectively.

Note:
Some container runtimes of earlier versions may have their own restrictions on the number of DNS search domains. Depending
on the container runtime environment, the pods with a large number of DNS search domains may get stuck in the pending
state.

It is known that containerd v1.5.5 or earlier and CRI-O v1.21 or earlier have this problem.

DNS resolution on Windows nodes


ClusterFirstWithHostNet is not supported for Pods that run on Windows nodes. Windows treats all names with a . as a FQDN
and skips FQDN resolution.
On Windows, there are multiple DNS resolvers that can be used. As these come with slightly different behaviors, using the
Resolve-DNSName powershell cmdlet for name query resolutions is recommended.
On Linux, you have a DNS suffix list, which is used after resolution of a name as fully qualified has failed. On Windows, you can
only have 1 DNS suffix, which is the DNS suffix associated with that Pod's namespace (example: mydns.svc.cluster.local ).
Windows can resolve FQDNs, Services, or network name which can be resolved with this single suffix. For example, a Pod
spawned in the default namespace, will have the DNS suffix default.svc.cluster.local . Inside a Windows Pod, you can
resolve both kubernetes.default.svc.cluster.local and kubernetes , but not the partially qualified names ( kubernetes.default
or kubernetes.default.svc ).

What's next
For guidance on administering DNS configurations, check Configure DNS Service

https://kubernetes.io/docs/concepts/_print/ 260/609
7/10/24, 9:28 AM Concepts | Kubernetes

5.8 - IPv4/IPv6 dual-stack


Kubernetes lets you configure single-stack IPv4 networking, single-stack IPv6 networking, or dual stack
networking with both network families active. This page explains how.

ⓘ FEATURE STATE: Kubernetes v1.23 [stable]

IPv4/IPv6 dual-stack networking enables the allocation of both IPv4 and IPv6 addresses to Pods and Services.

IPv4/IPv6 dual-stack networking is enabled by default for your Kubernetes cluster starting in 1.21, allowing the simultaneous
assignment of both IPv4 and IPv6 addresses.

Supported Features
IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:

Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod)
IPv4 and IPv6 enabled Services
Pod off-cluster egress routing (e.g. the Internet) via both IPv4 and IPv6 interfaces

Prerequisites
The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters:

Kubernetes 1.20 or later

For information about using dual-stack services with earlier Kubernetes versions, refer to the documentation for that version of
Kubernetes.

Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with
routable IPv4/IPv6 network interfaces)

A network plugin that supports dual-stack networking.

Configure IPv4/IPv6 dual-stack


To configure IPv4/IPv6 dual-stack, set dual-stack cluster network assignments:

kube-apiserver:
  --service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>

kube-controller-manager:
  --cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>
  --service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>
  --node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6 defaults to /24 for IPv4 and /64 for IPv6

kube-proxy:
  --cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>

kubelet:
  --node-ip=<IPv4 IP>,<IPv6 IP>
    This option is required for bare metal dual-stack nodes (nodes that do not define a cloud provider with the --cloud-provider
    flag). If you are using a cloud provider and choose to override the node IPs chosen by the cloud provider, set the --node-ip option.
    (The legacy built-in cloud providers do not support dual-stack --node-ip .)

Note:
An example of an IPv4 CIDR: 10.244.0.0/16 (though you would supply your own address range)

https://kubernetes.io/docs/concepts/_print/ 261/609
7/10/24, 9:28 AM Concepts | Kubernetes

An example of an IPv6 CIDR: fdXY:IJKL:MNOP:15::/64 (this shows the format but is not a valid address - see RFC 4193)
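
If you bootstrap clusters with kubeadm, a minimal ClusterConfiguration sketch could pass dual-stack ranges for you; the CIDRs below are examples only, and you should follow the kubeadm guide linked under What's next for an authoritative setup.

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  # example ranges only; substitute your own IPv4 and IPv6 CIDRs
  podSubnet: 10.244.0.0/16,fd00:10:244::/56
  serviceSubnet: 10.96.0.0/16,fd00:10:96::/112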

Services
You can create Services which can use IPv4, IPv6, or both.

The address family of a Service defaults to the address family of the first service cluster IP range (configured via the --service-
cluster-ip-range flag to the kube-apiserver).

When you define a Service you can optionally configure it as dual stack. To specify the behavior you want, you set the
.spec.ipFamilyPolicy field to one of the following values:

SingleStack : Single-stack service. The control plane allocates a cluster IP for the Service, using the first configured service cluster IP range.
PreferDualStack : Allocates both IPv4 and IPv6 cluster IPs for the Service when dual-stack is enabled. If dual-stack is not enabled or supported, it falls back to single-stack behavior.
RequireDualStack : Allocates Service .spec.clusterIPs from both IPv4 and IPv6 address ranges when dual-stack is enabled. If dual-stack is not enabled or supported, the Service API object creation fails.
  Selects the .spec.clusterIP from the list of .spec.clusterIPs based on the address family of the first element in the .spec.ipFamilies array.

If you would like to define which IP family to use for single stack or define the order of IP families for dual-stack, you can choose the
address families by setting an optional field, .spec.ipFamilies , on the Service.

Note:
The .spec.ipFamilies field is conditionally mutable: you can add or remove a secondary IP address family, but you cannot
change the primary IP address family of an existing Service.

You can set .spec.ipFamilies to any of the following array values:

["IPv4"]

["IPv6"]

["IPv4","IPv6"] (dual stack)


["IPv6","IPv4"] (dual stack)

The first family you list is used for the legacy .spec.clusterIP field.

Dual-stack Service configuration scenarios


These examples demonstrate the behavior of various dual-stack Service configuration scenarios.

Dual-stack options on new Services


1. This Service specification does not explicitly define .spec.ipFamilyPolicy . When you create this Service, Kubernetes assigns a
cluster IP for the Service from the first configured service-cluster-ip-range and sets the .spec.ipFamilyPolicy to SingleStack .
(Services without selectors and headless Services with selectors will behave in this same way.)

service/networking/dual-stack-default-svc.yaml

https://kubernetes.io/docs/concepts/_print/ 262/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app.kubernetes.io/name: MyApp
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80

2. This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy . When you create this Service on a dual-
stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the service. The control plane updates the .spec for the
Service to record the IP address assignments. The field .spec.clusterIPs is the primary field, and contains both assigned IP
addresses; .spec.clusterIP is a secondary field with its value calculated from .spec.clusterIPs .

For the .spec.clusterIP field, the control plane records the IP address that is from the same address family as the first
service cluster IP range.
On a single-stack cluster, the .spec.clusterIPs and .spec.clusterIP fields both only list one address.
On a cluster with dual-stack enabled, specifying RequireDualStack in .spec.ipFamilyPolicy behaves the same as
PreferDualStack .

service/networking/dual-stack-preferred-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app.kubernetes.io/name: MyApp
spec:
  ipFamilyPolicy: PreferDualStack
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80

3. This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in
.spec.ipFamilyPolicy . When Kubernetes assigns an IPv6 and IPv4 address in .spec.clusterIPs , .spec.clusterIP is set to the
IPv6 address because that is the first element in the .spec.clusterIPs array, overriding the default.

service/networking/dual-stack-preferred-ipfamilies-svc.yaml

https://kubernetes.io/docs/concepts/_print/ 263/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app.kubernetes.io/name: MyApp
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80

Dual-stack defaults on existing Services


These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist.
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)

1. When dual-stack is enabled on a cluster, existing Services (whether IPv4 or IPv6 ) are configured by the control plane to set
.spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the address family of the existing Service. The existing
Service cluster IP will be stored in .spec.clusterIPs .

service/networking/dual-stack-default-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app.kubernetes.io/name: MyApp
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80

You can validate this behavior by using kubectl to inspect an existing service.

kubectl get svc my-service -o yaml

https://kubernetes.io/docs/concepts/_print/ 264/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: MyApp
  name: my-service
spec:
  clusterIP: 10.0.197.123
  clusterIPs:
  - 10.0.197.123
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app.kubernetes.io/name: MyApp
  type: ClusterIP
status:
  loadBalancer: {}

2. When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set
.spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the address family of the first service cluster IP range
(configured via the --service-cluster-ip-range flag to the kube-apiserver) even though .spec.clusterIP is set to None .

service/networking/dual-stack-default-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app.kubernetes.io/name: MyApp
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80

You can validate this behavior by using kubectl to inspect an existing headless service with selectors.

kubectl get svc my-service -o yaml

https://kubernetes.io/docs/concepts/_print/ 265/609
7/10/24, 9:28 AM Concepts | Kubernetes

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: MyApp
  name: my-service
spec:
  clusterIP: None
  clusterIPs:
  - None
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app.kubernetes.io/name: MyApp

Switching Services between single-stack and dual-stack


Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.

1. To change a Service from single-stack to dual-stack, change .spec.ipFamilyPolicy from SingleStack to PreferDualStack or
RequireDualStack as desired. When you change this Service from single-stack to dual-stack, Kubernetes assigns the missing
address family so that the Service now has IPv4 and IPv6 addresses.

Edit the Service specification updating the .spec.ipFamilyPolicy from SingleStack to PreferDualStack .

Before:

spec:
  ipFamilyPolicy: SingleStack

After:

spec:
  ipFamilyPolicy: PreferDualStack

2. To change a Service from dual-stack to single-stack, change .spec.ipFamilyPolicy from PreferDualStack or RequireDualStack to
SingleStack . When you change this Service from dual-stack to single-stack, Kubernetes retains only the first element in the
.spec.clusterIPs array, and sets .spec.clusterIP to that IP address and sets .spec.ipFamilies to the address family of
.spec.clusterIPs .

Headless Services without selector


For Headless Services without selectors and without .spec.ipFamilyPolicy explicitly set, the .spec.ipFamilyPolicy field defaults to
RequireDualStack .
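
For example, here is a sketch of a headless Service without a selector that omits .spec.ipFamilyPolicy and would therefore default to RequireDualStack; the name and port are illustrative.

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service     # hypothetical name
spec:
  clusterIP: None               # headless
  # no selector and no ipFamilyPolicy: the field defaults to RequireDualStack
  ports:
  - protocol: TCP
    port: 80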

Service type LoadBalancer


To provision a dual-stack load balancer for your Service:

Set the .spec.type field to LoadBalancer


Set .spec.ipFamilyPolicy field to PreferDualStack or RequireDualStack

Note:
https://kubernetes.io/docs/concepts/_print/ 266/609
7/10/24, 9:28 AM Concepts | Kubernetes

To use a dual-stack LoadBalancer type Service, your cloud provider must support IPv4 and IPv6 load balancers.
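
For example, here is a sketch of a Service that combines both settings; the name and selector are illustrative.

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service           # hypothetical name
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80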

Egress traffic
If you want to enable egress traffic in order to reach off-cluster destinations (e.g. the public Internet) from a Pod that uses non-
publicly routable IPv6 addresses, you need to enable the Pod to use a publicly routed IPv6 address via a mechanism such as
transparent proxying or IP masquerading. The ip-masq-agent project supports IP masquerading on dual-stack clusters.

Note:
Ensure your CNI provider supports IPv6.

Windows support
Kubernetes on Windows does not support single-stack "IPv6-only" networking. However, dual-stack IPv4/IPv6 networking for pods
and nodes with single-family services is supported.

You can use IPv4/IPv6 dual-stack networking with l2bridge networks.

Note:
Overlay (VXLAN) networks on Windows do not support dual-stack networking.

You can read more about the different network modes for Windows within the Networking on Windows topic.

What's next
Validate IPv4/IPv6 dual-stack networking
Enable dual-stack networking using kubeadm

https://kubernetes.io/docs/concepts/_print/ 267/609
7/10/24, 9:28 AM Concepts | Kubernetes

5.9 - Topology Aware Routing


Topology Aware Routing provides a mechanism to help keep network traffic within the zone where it
originated. Preferring same-zone traffic between Pods in your cluster can help with reliability, performance
(network latency and throughput), or cost.

ⓘ FEATURE STATE: Kubernetes v1.23 [beta]

Note:
Prior to Kubernetes 1.27, this feature was known as Topology Aware Hints.

Topology Aware Routing adjusts routing behavior to prefer keeping traffic in the zone it originated from. In some cases this can help
reduce costs or improve network performance.

Motivation
Kubernetes clusters are increasingly deployed in multi-zone environments. Topology Aware Routing provides a mechanism to help
keep traffic within the zone it originated from. When calculating the endpoints for a Service, the EndpointSlice controller considers
the topology (region and zone) of each endpoint and populates the hints field to allocate it to a zone. Cluster components such as
kube-proxy can then consume those hints, and use them to influence how the traffic is routed (favoring topologically closer
endpoints).

Enabling Topology Aware Routing


Note:
Prior to Kubernetes 1.27, this behavior was controlled using the service.kubernetes.io/topology-aware-hints annotation.

You can enable Topology Aware Routing for a Service by setting the service.kubernetes.io/topology-mode annotation to Auto . When
there are enough endpoints available in each zone, Topology Hints will be populated on EndpointSlices to allocate individual
endpoints to specific zones, resulting in traffic being routed closer to where it originated from.
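
For example, here is a sketch of a Service with the annotation set; the name and selector are illustrative.

apiVersion: v1
kind: Service
metadata:
  name: my-service              # hypothetical name
  annotations:
    service.kubernetes.io/topology-mode: Auto
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80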

When it works best


This feature works best when:

1. Incoming traffic is evenly distributed


If a large proportion of traffic is originating from a single zone