In Chapter 5: EKS Cluster (Amazon Elastic Kubernetes Service), we deployed the EKS Control Plane—the centralized orchestration engine that manages our containerized workloads—within our secure VPC (Virtual Private Cloud). This control plane serves as the "brain" of our cloud-native application infrastructure.
Now, it's time to provision the Data Plane—the actual compute resources that will run our application containers. These are the worker nodes, organized into Node Groups, which serve as the "production lines" of our Kubernetes factory. It’s on these machines that your application code is deployed, scaled, and executed.
In this chapter, we’ll explore how to define and manage these node groups, customize their capacity per environment, and support blue-green deployments for greater flexibility and resilience.
Even with a powerful orchestration engine like the EKS Control Plane, your workloads still require compute capacity to execute. In Kubernetes, this capacity is provided by worker nodes—and in AWS, these nodes are typically EC2 instances.
These EC2 instances serve as the runtime environment for your containerized applications. Kubernetes schedules and manages your containers across these nodes, ensuring optimal resource utilization and uptime.
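To see this relationship for yourself, you can list the worker nodes behind a running cluster and the EC2 instance type each one maps to. The following is a minimal sketch using the Kubernetes Python client; it assumes your local kubeconfig already points at the EKS cluster, and the label keys shown are the standard ones Kubernetes and EKS apply to managed nodes.

```python
# Minimal sketch: list the worker nodes backing the cluster and show
# which EC2 instance type and node group each node belongs to.
# Assumes a kubeconfig is already configured for the EKS cluster.
from kubernetes import client, config

config.load_kube_config()   # reuse the local kubeconfig context
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    print(
        node.metadata.name,
        labels.get("node.kubernetes.io/instance-type", "unknown"),
        labels.get("eks.amazonaws.com/nodegroup", "unmanaged"),
    )
```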
Consider a production scenario, such as an online store. You need to keep enough capacity available as traffic fluctuates, roll out new application versions without downtime, and recover quickly when an update goes wrong.
Manually managing individual EC2 instances to meet these requirements would be complex, error-prone, and inefficient.
This is where Node Groups come in.
Node groups are logical collections of EC2 instances that EKS uses to host your Kubernetes workloads. They give you a single unit for provisioning, scaling, and updating worker nodes, so capacity can be managed per environment rather than instance by instance.
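To make this concrete, here is a minimal sketch of registering a managed node group against an existing cluster with boto3. The cluster name, subnet IDs, IAM role ARN, and sizing values are placeholders standing in for the resources created in earlier chapters, not the project's actual configuration.

```python
# Minimal sketch: attach a managed node group to an existing EKS cluster.
# Cluster name, subnets, and the node IAM role ARN are placeholder values.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

response = eks.create_nodegroup(
    clusterName="online-store",                               # assumed cluster name
    nodegroupName="general-workers",
    nodeRole="arn:aws:iam::123456789012:role/eks-node-role",  # placeholder ARN
    subnets=["subnet-aaaa1111", "subnet-bbbb2222"],           # private subnets from the VPC chapter
    instanceTypes=["t3.medium"],
    scalingConfig={"minSize": 2, "maxSize": 5, "desiredSize": 3},
    labels={"environment": "dev"},
)
print(response["nodegroup"]["status"])  # typically "CREATING" right after the call
```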
To address the challenges of live updates and safe rollbacks, our project implements a Blue-Green deployment strategy using two separate node groups: a blue group and a green group. One group serves live workloads while the other can receive an updated configuration, be validated, and then take over, keeping a quick rollback path if something goes wrong. A sketch of this layout follows.
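The sketch below illustrates the idea: two node groups that are identical except for a color label, sized per environment. The names, capacity values, and label key are assumptions for illustration; the project's real definitions may live in your infrastructure-as-code tooling rather than in a script like this.

```python
# Illustrative sketch of the blue-green node group layout: two groups with
# identical settings except for a "color" label, sized per environment.
# All names, sizes, and labels here are assumptions, not project values.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

CAPACITY = {  # example per-environment capacity
    "dev":  {"minSize": 1, "maxSize": 2, "desiredSize": 1},
    "prod": {"minSize": 3, "maxSize": 6, "desiredSize": 3},
}

def create_color_nodegroup(color: str, environment: str) -> None:
    """Create one side of the blue-green pair for the given environment."""
    eks.create_nodegroup(
        clusterName="online-store",                               # assumed cluster name
        nodegroupName=f"{environment}-{color}-workers",
        nodeRole="arn:aws:iam::123456789012:role/eks-node-role",  # placeholder ARN
        subnets=["subnet-aaaa1111", "subnet-bbbb2222"],           # placeholder subnets
        instanceTypes=["t3.medium"],
        scalingConfig=CAPACITY[environment],
        labels={"deployment-color": color},  # workloads target a color via a nodeSelector
    )

# During an upgrade: bring up the idle color with the new configuration,
# shift workloads to it, then scale the old color down once validated.
create_color_nodegroup("blue", "prod")
create_color_nodegroup("green", "prod")
```

With a layout like this, workloads select the active color through a nodeSelector on the color label; cutting over is then a matter of updating that selector and letting pods reschedule, while the previous node group stays available for a fast rollback.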