Kubernetes Autoscaling
Before diving into advanced tools such as KEDA and Karpenter, I need you to have a solid foundation. Kubernetes ships with built-in autoscaling capabilities that many teams either overlook or misconfigure, and understanding how these work is essential for everything that comes later.
This part covers the fundamentals – not just the theory, but the practical aspects of getting autoscaling to actually work in your cluster. You’ll start by exploring the core concepts and components that make autoscaling possible, then move on to workload autoscaling using the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA). You’ll learn why pod resource requests matter more than you might think, and how monitoring and rightsizing your workloads directly impact both cluster efficiency and your ability to scale effectively.
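To make that concrete before we get there, here is a minimal sketch of the two pieces involved in CPU-based workload autoscaling: a container that declares a CPU request, and an HPA that scales the Deployment when average CPU utilization (measured against that request) crosses a target. The names (web, web-hpa), the image, and the 70% target are illustrative placeholders, not values from a specific chapter.

# Excerpt from a Deployment pod template: the CPU request is the baseline
# the HPA measures utilization against.
    spec:
      containers:
      - name: web
        image: nginx:1.27
        resources:
          requests:
            cpu: "250m"        # the denominator for the utilization calculation
            memory: "256Mi"
---
# HorizontalPodAutoscaler (stable autoscaling/v2 API) targeting that Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # percentage of the pod's CPU request, not of node capacity

This is also why resource requests matter so much: without a CPU request on every container, the HPA has no baseline to compute utilization against and cannot scale on that metric.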
The goal here isn’t to make you an expert on the HPA or VPA; most production environments eventually outgrow these tools. The goal is to understand how Kubernetes thinks about scaling, what metrics matter, and where the native autoscalers fall short. That context is what makes the jump to KEDA and Karpenter meaningful, because you’ll see exactly which problems they solve and why they exist in the first place.
By the end of this part, you’ll have a working cluster, hands-on experience with the HPA and VPA, and a clear picture of when native autoscaling is enough and when you need something more sophisticated.
This part has the following chapters: