Day 28 - Building a Highly Available 3-Tier AWS Application with Terraform and GitHub Actions
For Day 28, the goal was to build a highly available 3-tier application on AWS. The application included a Node.js frontend, a Go backend API, and a PostgreSQL database. My mentor demonstrated this using VS Code and manual Docker commands, but I wanted to take it one step further and deploy it through GitHub Actions.
The objective was not only to create AWS infrastructure, but also to understand how application code, Docker images, Terraform, and AWS services work together in a real deployment workflow.
Architecture Overview
The application was deployed across multiple layers inside a custom VPC.
The public layer contains an internet-facing Application Load Balancer. This is the only entry point exposed to users. The frontend layer runs Node.js containers on EC2 instances managed by an Auto Scaling Group. The backend layer runs Go containers on EC2 instances behind an internal Application Load Balancer. The database layer uses Amazon RDS PostgreSQL in private database subnets.
The application flow is:
User
→ Public ALB
→ Node.js Frontend
→ Internal ALB
→ Go Backend
→ Amazon RDS PostgreSQL
AWS Services Used
This project used the following AWS services:
Amazon VPC
Public and private subnets
Internet Gateway
NAT Gateway
Application Load Balancer
Auto Scaling Groups
Launch Templates
Amazon EC2
Amazon RDS PostgreSQL
AWS Secrets Manager
IAM roles and instance profiles
CloudWatch
Amazon S3 backend for Terraform state
Each service had a specific role. VPC handled network isolation. ALB handled traffic distribution. Auto Scaling Groups handled instance availability. RDS handled the database layer. Secrets Manager stored database credentials. Terraform managed the full infrastructure lifecycle.
CI/CD Workflow
Instead of building and pushing Docker images manually, I used GitHub Actions.
The pipeline performs three major actions:
1. Build and push Docker images
2. Run Terraform plan
3. Run Terraform apply for the dev environment
For this dev environment, no manual approval was required. Every push to the main branch triggered the workflow.
This screenshot proves that Docker build, Docker push, Terraform plan, and Terraform apply were executed from GitHub Actions.
Docker Image Build
The frontend and backend were containerized separately.
The frontend used a Node.js Docker image. The backend used a Go multi-stage Docker build. GitHub Actions built both images and pushed them to Docker Hub.
This helped separate application packaging from infrastructure deployment. The EC2 instances only needed to pull and run the correct Docker images.
This proves that the CI/CD pipeline successfully created deployable application artifacts.
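To connect the two stages, the image tags that the pipeline pushes have to reach Terraform. A minimal sketch of one way to do this, using illustrative variable names rather than the exact ones from this project, is to declare them as Terraform input variables that the workflow passes in with -var; the launch templates can then render them into EC2 user data, as sketched in the Auto Scaling Groups section below.
variable "frontend_image" {
  description = "Node.js frontend image pushed by the CI pipeline (illustrative name)"
  type        = string
}

variable "backend_image" {
  description = "Go backend image pushed by the CI pipeline (illustrative name)"
  type        = string
}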
Terraform Infrastructure
Terraform created the AWS infrastructure. The state was stored remotely in an S3 backend.
The backend configuration used:
terraform {
  backend "s3" {
    bucket       = "jay-terraformstate-bucket"
    key          = "day-28v2/dev/terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true
  }
}
Remote state was important because GitHub Actions needed a consistent state location. Without remote state, every pipeline run would behave like a fresh deployment.
This proves Terraform state is managed remotely and not tied to a local machine.
Networking Design
The VPC had multiple subnet layers:
Public subnets
Frontend private subnets
Backend private subnets
Database private subnets
Only the public ALB was internet-facing. The frontend instances, the backend instances, and the RDS PostgreSQL database were all private.
This is a key production pattern. Users should never directly access backend servers or databases.
This proves the subnet layout and networking design.
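A simplified Terraform sketch of this subnet layout might look like the following; the CIDR ranges, Availability Zones, and resource names are illustrative, not the project's actual values:
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# Public subnets for the internet-facing ALB and NAT Gateway, one per AZ
resource "aws_subnet" "public" {
  for_each                = { "us-east-1a" = "10.0.0.0/24", "us-east-1b" = "10.0.1.0/24" }
  vpc_id                  = aws_vpc.main.id
  availability_zone       = each.key
  cidr_block              = each.value
  map_public_ip_on_launch = true
}

# Private subnets for the Node.js frontend instances
resource "aws_subnet" "frontend" {
  for_each          = { "us-east-1a" = "10.0.10.0/24", "us-east-1b" = "10.0.11.0/24" }
  vpc_id            = aws_vpc.main.id
  availability_zone = each.key
  cidr_block        = each.value
}

# Private subnets for the Go backend instances
resource "aws_subnet" "backend" {
  for_each          = { "us-east-1a" = "10.0.20.0/24", "us-east-1b" = "10.0.21.0/24" }
  vpc_id            = aws_vpc.main.id
  availability_zone = each.key
  cidr_block        = each.value
}

# Private subnets for the RDS PostgreSQL database
resource "aws_subnet" "database" {
  for_each          = { "us-east-1a" = "10.0.30.0/24", "us-east-1b" = "10.0.31.0/24" }
  vpc_id            = aws_vpc.main.id
  availability_zone = each.key
  cidr_block        = each.value
}
Spreading each layer across two Availability Zones is what keeps the load balancers and RDS highly available. Route tables send the public subnets to the Internet Gateway and the private subnets to the NAT Gateway for outbound-only traffic.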
Load Balancer Design
There were two Application Load Balancers.
The public ALB received internet traffic and forwarded it to the frontend target group.
The internal ALB received traffic only from the frontend layer and forwarded it to the backend target group.
This separates public access from internal service communication.
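A hedged sketch of the two ALBs and their target groups, reusing the subnets from the snippet above and omitting listeners and security groups for brevity (the application ports are assumptions):
# Internet-facing ALB: the only entry point exposed to users
resource "aws_lb" "public" {
  name               = "public-alb"
  internal           = false
  load_balancer_type = "application"
  subnets            = [for s in aws_subnet.public : s.id]
}

# Internal ALB: receives traffic only from the frontend layer
resource "aws_lb" "internal" {
  name               = "internal-alb"
  internal           = true
  load_balancer_type = "application"
  subnets            = [for s in aws_subnet.backend : s.id]
}

# Target groups that the frontend and backend Auto Scaling Groups register into
resource "aws_lb_target_group" "frontend" {
  name     = "frontend-tg"
  port     = 3000
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_target_group" "backend" {
  name     = "backend-tg"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

# DNS name used later for the browser test against /api-test
output "public_alb_dns" {
  value = aws_lb.public.dns_name
}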
Auto Scaling Groups
The frontend and backend were both deployed using Auto Scaling Groups.
The frontend ASG maintained multiple frontend instances. The backend ASG maintained multiple backend instances.
This means if one EC2 instance fails, the Auto Scaling Group can replace it automatically.
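A rough sketch of the backend tier, assuming the backend_image variable from the CI/CD section and the backend target group above; the AMI, instance type, port, and boot script are illustrative, and the frontend ASG follows the same pattern:
variable "ami_id" {
  description = "AMI with Docker preinstalled (illustrative)"
  type        = string
}

resource "aws_launch_template" "backend" {
  name_prefix   = "backend-"
  image_id      = var.ami_id
  instance_type = "t3.micro"

  # Boot script pulls and runs the image that GitHub Actions pushed
  user_data = base64encode(<<-EOF
    #!/bin/bash
    docker pull ${var.backend_image}
    docker run -d --restart=always -p 8080:8080 ${var.backend_image}
  EOF
  )
}

resource "aws_autoscaling_group" "backend" {
  name                = "backend-asg"
  min_size            = 2
  max_size            = 4
  desired_capacity    = 2
  vpc_zone_identifier = [for s in aws_subnet.backend : s.id]
  target_group_arns   = [aws_lb_target_group.backend.arn]

  launch_template {
    id      = aws_launch_template.backend.id
    version = "$Latest"
  }

  # Unhealthy instances are terminated and replaced automatically
  health_check_type         = "ELB"
  health_check_grace_period = 120
}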
Database and Secrets
Amazon RDS PostgreSQL was used for the database tier. The database was not publicly accessible.
Database credentials were stored in AWS Secrets Manager. Backend EC2 instances used an IAM role to read the secret and pass database values into the Go container.
This avoided hardcoding database credentials directly into the application image.
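A hedged sketch of that wiring, with illustrative names and values; the project's actual resources may differ, and the instance profile that attaches the role to the backend instances is omitted for brevity:
# Generate the password so credentials never appear in the Terraform code
resource "random_password" "db" {
  length  = 20
  special = false
}

resource "aws_db_subnet_group" "db" {
  name       = "db-subnets"
  subnet_ids = [for s in aws_subnet.database : s.id]
}

resource "aws_db_instance" "postgres" {
  identifier           = "day28-postgres"
  engine               = "postgres"
  instance_class       = "db.t3.micro"
  allocated_storage    = 20
  db_name              = "appdb"
  username             = "appuser"
  password             = random_password.db.result
  db_subnet_group_name = aws_db_subnet_group.db.name
  publicly_accessible  = false
  skip_final_snapshot  = true
}

# Store the connection details that the backend reads at boot
resource "aws_secretsmanager_secret" "db" {
  name = "day28/db-credentials"
}

resource "aws_secretsmanager_secret_version" "db" {
  secret_id = aws_secretsmanager_secret.db.id
  secret_string = jsonencode({
    host     = aws_db_instance.postgres.address
    username = aws_db_instance.postgres.username
    password = random_password.db.result
    dbname   = aws_db_instance.postgres.db_name
  })
}

# Backend instance role, allowed to read only this one secret
resource "aws_iam_role" "backend" {
  name = "backend-instance-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "read_db_secret" {
  name = "read-db-secret"
  role = aws_iam_role.backend.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["secretsmanager:GetSecretValue"]
      Resource = aws_secretsmanager_secret.db.arn
    }]
  })
}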
Issue Faced: PostgreSQL SSL Connection
During testing, the frontend was able to reach the backend, but the backend failed to connect to PostgreSQL.
The error was:
pq: no pg_hba.conf entry for host, user, database, no encryption
This meant PostgreSQL required an encrypted connection. The fix was to update the Go backend connection string from:
sslmode=disable
to:
sslmode=require
After rebuilding the backend image and replacing the backend instances, the connection succeeded.
This was a useful troubleshooting lesson. The network path was correct, but the database rejected the connection because SSL was not enabled in the client connection.
Final Validation
The final validation was done from the browser using the public ALB endpoint:
/api-test
The response was:
Backend healthy and PostgreSQL connection successful
This confirmed the full path:
Browser
→ Public ALB
→ Frontend
→ Internal ALB
→ Backend
→ RDS PostgreSQL
What I Learned
This project helped me understand how a real 3-tier application is deployed on AWS. I learned how frontend, backend, and database layers are separated using private networking and load balancers. I also learned how GitHub Actions can automate Docker image builds and Terraform deployments. The PostgreSQL SSL issue helped me understand that successful networking does not always mean successful application connectivity.
Cleanup
Since this project creates billable AWS resources, cleanup is important. I also created a GitHub Actions workflow for terraform destroy, which ran successfully.
Validate that the following resources are removed:
ALBs
Auto Scaling Groups
EC2 instances
NAT Gateway
RDS instance
Secrets Manager secret
VPC resources
Final Thoughts
Day 28 was one of the most practical projects in this Terraform learning journey. It connected multiple real-world concepts: CI/CD, Docker, Terraform, private networking, load balancing, auto scaling, secrets management, and database connectivity.
The biggest shift was moving from manual deployment to automated deployment. That is closer to how infrastructure is managed in real environments.