Day 20 - Deploying an Amazon EKS Cluster Using Custom Terraform Modules
Introduction
In this project, I deployed a complete Amazon EKS environment using custom Terraform modules. The goal was to understand how production-style Kubernetes infrastructure is organized with reusable Terraform modules instead of a single monolithic configuration.
The deployment included:
- Custom VPC across 3 Availability Zones
- Public and private subnets
- NAT Gateway
- IAM roles for EKS
- Amazon EKS cluster
- Managed node groups
- Spot and On-Demand worker nodes
- IRSA and OIDC provider
- Kubernetes add-ons
- NGINX sample application deployment
- AWS Load Balancer integration
This project helped me better understand how Kubernetes networking, IAM, Terraform modules, and AWS managed services work together in real-world environments.
Architecture Diagram
Project Structure
```
day20-eks-custom-modules/
├── main.tf
├── variables.tf
├── outputs.tf
├── provider.tf
├── backend.tf
├── modules/
│   ├── vpc/
│   ├── iam/
│   ├── eks/
│   └── secrets-manager/
└── k8s/
```
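The root configuration wires these modules together by passing outputs from one module into the inputs of the next. A minimal sketch of `main.tf` (module paths match the tree above; variable and output names are illustrative, not the exact ones from my code):

```hcl
# Hypothetical root main.tf: each module exposes outputs
# that downstream modules consume as inputs.
module "vpc" {
  source = "./modules/vpc"

  vpc_cidr = var.vpc_cidr
  azs      = var.availability_zones
}

module "iam" {
  source = "./modules/iam"
}

module "eks" {
  source = "./modules/eks"

  cluster_name     = var.cluster_name
  subnet_ids       = module.vpc.private_subnet_ids
  cluster_role_arn = module.iam.cluster_role_arn
  node_role_arn    = module.iam.node_role_arn
}
```

This output-to-input chaining is what makes Terraform build the VPC and IAM resources before the cluster: the dependencies are implied by the references.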
What This Project Proves
This project demonstrates several important AWS and Kubernetes concepts.
| Component | What It Proves |
|---|---|
| Custom Terraform Modules | Infrastructure can be organized in reusable components |
| VPC Module | Kubernetes networking foundation |
| Private Subnets | Secure worker node deployment |
| NAT Gateway | Secure outbound internet access |
| IAM Module | Proper AWS permissions for EKS |
| EKS Module | Managed Kubernetes deployment |
| Managed Node Groups | Automated Kubernetes worker management |
| Spot Nodes | Cost optimization strategy |
| IRSA and OIDC | Secure pod level IAM integration |
| Kubernetes Deployment | Application orchestration |
| LoadBalancer Service | AWS integration with Kubernetes networking |
| Terraform Remote State | Team collaboration and state management |
Understanding the Infrastructure
VPC Module
The VPC module creates the networking layer required by Amazon EKS.
Resources created:
- VPC
- Public subnets
- Private subnets
- Internet Gateway
- NAT Gateway
- Route tables
The worker nodes are deployed inside private subnets for better security.
The public subnets are used by AWS Load Balancers.
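A trimmed sketch of the core of `modules/vpc/main.tf` (public subnets, the NAT Gateway's Elastic IP, and route tables are omitted for brevity; CIDR math is illustrative):

```hcl
resource "aws_vpc" "this" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true # EKS worker nodes require DNS hostnames
}

# One private subnet per Availability Zone
resource "aws_subnet" "private" {
  count             = length(var.azs)
  vpc_id            = aws_vpc.this.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 4, count.index)
  availability_zone = var.azs[count.index]
}

resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id
}
```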
Why Public and Private Subnets Are Needed
Public subnets are required for:
- Load Balancers
- NAT Gateway
Private subnets are required for:
- Kubernetes worker nodes
- Internal application communication
- Better security isolation
This is a common enterprise deployment pattern.
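One detail worth calling out: EKS discovers which subnets may host load balancers through well-known tags. The tag keys below are the documented EKS conventions; the `cluster_name` variable is illustrative:

```hcl
resource "aws_subnet" "public" {
  # ... cidr_block, vpc_id, availability_zone ...
  tags = {
    "kubernetes.io/role/elb"                    = "1"      # internet-facing load balancers
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}

resource "aws_subnet" "private" {
  # ... cidr_block, vpc_id, availability_zone ...
  tags = {
    "kubernetes.io/role/internal-elb"           = "1"      # internal load balancers
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}
```

Without these tags, a `LoadBalancer` Service can fail to provision because AWS cannot decide where to place the load balancer.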
IAM Module
The IAM module creates:
- EKS cluster IAM role
- Node group IAM role
- Required AWS managed policies
Amazon EKS requires IAM permissions to interact with EC2, networking, and Kubernetes resources.
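A sketch of the cluster role from `modules/iam/main.tf` (role name is illustrative; the trust policy and managed policy ARN are the standard ones for EKS):

```hcl
# The control plane assumes this role to manage AWS resources on our behalf.
resource "aws_iam_role" "cluster" {
  name = "eks-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "cluster" {
  role       = aws_iam_role.cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}
```

The node group role follows the same pattern but trusts `ec2.amazonaws.com` and attaches `AmazonEKSWorkerNodePolicy`, `AmazonEKS_CNI_Policy`, and `AmazonEC2ContainerRegistryReadOnly`.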
EKS Module
The EKS module creates:
- Amazon EKS cluster
- Managed node groups
- Kubernetes add-ons
- OIDC provider
- IRSA integration
The cluster uses:
- On-Demand nodes for stable workloads
- Spot nodes for cost optimization
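A sketch of the Spot node group from the EKS module; the general group is identical except for `capacity_type = "ON_DEMAND"`. Instance types and scaling numbers are illustrative:

```hcl
resource "aws_eks_node_group" "spot" {
  cluster_name    = var.cluster_name
  node_group_name = "spot"
  node_role_arn   = var.node_role_arn
  subnet_ids      = var.private_subnet_ids

  capacity_type  = "SPOT"
  instance_types = ["t3.medium", "t3a.medium"] # multiple types improve Spot availability

  scaling_config {
    desired_size = 2
    min_size     = 0
    max_size     = 4
  }
}
```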
Why Spot Nodes Were Added
Spot Instances are significantly cheaper than On-Demand EC2 instances, in exchange for the possibility of interruption when AWS reclaims the capacity.
In this project:
- The general node group uses On-Demand capacity
- The Spot node group uses Spot capacity
The Spot node group also carries a Kubernetes taint, so workloads land on Spot capacity only when they explicitly tolerate it.
This is useful for:
- Batch jobs
- Non-critical workloads
- Cost sensitive applications
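The taint is declared directly on the node group resource. A sketch with an illustrative key and value:

```hcl
resource "aws_eks_node_group" "spot" {
  # ... cluster_name, node_role_arn, subnet_ids, capacity_type = "SPOT" ...

  # Pods are repelled unless they declare a matching toleration.
  taint {
    key    = "capacity"
    value  = "spot"
    effect = "NO_SCHEDULE"
  }
}
```

A pod then opts in by listing a toleration for `capacity=spot:NoSchedule` in its spec, which keeps interruption-sensitive workloads off Spot capacity by default.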
Kubernetes Add-ons
The project deploys several important EKS add-ons.
| Add-on | Purpose |
|---|---|
| CoreDNS | Internal Kubernetes DNS |
| kube-proxy | Cluster networking |
| VPC CNI | AWS pod networking |
| EBS CSI Driver | Persistent storage integration |
These add-ons are critical for cluster functionality.
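The EKS module can manage these through the `aws_eks_addon` resource. A sketch for two of them — the EBS CSI driver is also where IRSA comes in, since the driver's pods need AWS permissions to create volumes (the IAM role reference is illustrative):

```hcl
resource "aws_eks_addon" "coredns" {
  cluster_name = aws_eks_cluster.this.name
  addon_name   = "coredns"
}

resource "aws_eks_addon" "ebs_csi" {
  cluster_name             = aws_eks_cluster.this.name
  addon_name               = "aws-ebs-csi-driver"
  # IRSA: the add-on's service account assumes this role via the OIDC provider
  service_account_role_arn = aws_iam_role.ebs_csi.arn
}
```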
Terraform Initialization
I initialized Terraform using:
```shell
terraform init
```
This downloaded:
- AWS provider
- Terraform modules
- Backend configuration
The command output confirms Terraform initialized correctly.
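The backend configuration mentioned above lives in `backend.tf` and stores state remotely, which is what makes team collaboration possible. A sketch, with placeholder bucket and table names:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "day20-eks/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # state locking so two applies cannot collide
    encrypt        = true
  }
}
```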
Terraform Validation
I formatted and validated the configuration using:
```shell
terraform fmt -recursive
terraform validate
```
This proves the Terraform syntax is valid.
Terraform Plan
Next, I generated the execution plan.
```shell
terraform plan
```
Terraform identified resources for:
- VPC
- Subnets
- NAT Gateway
- IAM roles
- EKS cluster
- Node groups
- OIDC provider
- Secrets Manager resources
Deploying the Infrastructure
I deployed the infrastructure using:
```shell
terraform apply
```
The EKS deployment took approximately 15 to 20 minutes.
This proves the AWS infrastructure was deployed successfully.
Verifying the EKS Cluster
After deployment, I configured kubectl.
This connected my local kubectl client to the EKS cluster.
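The kubeconfig update uses the AWS CLI; region and cluster name below are placeholders for the actual values from my Terraform outputs:

```shell
aws eks update-kubeconfig --region us-east-1 --name day20-eks-cluster
```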
Verifying Worker Nodes
I verified worker nodes using:
```shell
kubectl get nodes
```
Expected result:
STATUS = Ready
Verifying Kubernetes System Pods
I verified Kubernetes system pods using:
```shell
kubectl get pods -A
```
Expected pods include:
- CoreDNS
- kube-proxy
- aws-node
This proves the Kubernetes control plane and add-ons are functioning correctly.
Deploying the Sample Application
I deployed a sample NGINX application.
This created:
- Kubernetes deployment
- Kubernetes service
- AWS Load Balancer
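A minimal sketch of the manifest in `k8s/` (names, replica count, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer # asks AWS to provision an external load balancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

The `type: LoadBalancer` Service is what triggers the AWS integration: the cloud controller sees it and provisions a load balancer in the tagged public subnets.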
Verifying Application Pods
I verified the application pods.
```shell
kubectl get pods
```
Expected status:
Running
This proves the application containers are running successfully.
Verifying the LoadBalancer Service
I verified the Kubernetes service.
```shell
kubectl get svc
```
Expected result:
An EXTERNAL-IP value populated for the service (the DNS name of the provisioned load balancer)
This proves AWS successfully provisioned a Load Balancer for the application.
Testing the Application
Finally, I opened the AWS Load Balancer DNS URL in the browser.
Expected page:
Welcome to nginx!
This proves end-to-end Kubernetes networking and AWS Load Balancer integration worked successfully.
AWS Console Verification
I also verified resources directly in the AWS Console.
Resources checked:
- EKS cluster
- Node groups
- VPC
- Subnets
- NAT Gateway
- Load Balancer
This proves the EKS cluster exists in AWS.
This proves both General and Spot node groups were created.
Cost Considerations
Amazon EKS is powerful but can become expensive if resources are not cleaned up properly.
Major cost contributors:
| Resource | Approximate Cost |
|---|---|
| EKS Control Plane | ~$73/month |
| NAT Gateway | ~$32/month |
| EC2 Worker Nodes | Variable |
| Load Balancer | Variable |
For learning projects, it is important to destroy infrastructure after testing.
Cleanup
I removed all infrastructure using:
```shell
terraform destroy
```
This proves all AWS resources were cleaned up successfully.
Key Learnings
1. Terraform Modules Improve Organization
Breaking infrastructure into reusable modules makes Terraform projects cleaner and easier to maintain.
2. EKS Requires Strong Networking Knowledge
Understanding subnets, route tables, NAT Gateway, and Load Balancers is critical when deploying Kubernetes on AWS.
3. Private Worker Nodes Improve Security
Running worker nodes in private subnets reduces exposure to the internet.
4. Managed Node Groups Simplify Kubernetes Operations
AWS automatically handles scaling, upgrades, and worker node lifecycle management.
5. Spot Instances Help Reduce Cost
Spot nodes provide significant cost savings for non-critical workloads.
6. Kubernetes Integrates Deeply with AWS
EKS automatically integrates with:
- IAM
- VPC
- EC2
- Load Balancers
- Security Groups
Conclusion
This project gave me practical experience with Terraform module design, Kubernetes networking, EKS managed services, IAM integration, and AWS infrastructure automation.
It also helped me understand how production style Kubernetes environments are structured using reusable Terraform modules and secure AWS networking practices.