Day 2 – Terraform Providers and Versioning Explained with Real AWS Deployment


Introduction

On Day 2 of my 30-Day AWS Terraform Challenge, I moved from understanding concepts to actually running Terraform against AWS.

This day focused on Terraform Providers, versioning, and why controlling versions is critical when working with Infrastructure as Code.

More importantly, I successfully created real AWS resources using Terraform.


AWS Configuration

Before running Terraform, I configured the AWS CLI so that my local environment could connect to my AWS account. First, I installed the AWS CLI and verified the installation. Then I configured my credentials.
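The commands looked roughly like this (the `aws configure` prompts ask for your own access key, secret key, default region, and output format):

```shell
# Verify the AWS CLI installation
aws --version

# Configure credentials interactively
aws configure

# Confirm the local machine is authenticated with AWS
aws sts get-caller-identity
```

If `aws sts get-caller-identity` returns your account ID and user ARN, the CLI is talking to AWS correctly.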


This confirmed that my local machine was successfully authenticated with AWS.


What are Terraform Providers

Terraform providers act as a bridge between Terraform and external systems like AWS.

Terraform itself does not directly create resources. Instead, it uses providers such as the AWS provider to communicate with cloud APIs.

For AWS, the provider used is hashicorp/aws.
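Declaring the provider in a required_providers block tells Terraform where to download it from. A minimal sketch (the region is illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # registry address of the provider
      version = "~> 5.0"          # any 5.x release
    }
  }
}

provider "aws" {
  region = "us-east-1"  # example region; use your own
}
```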


Terraform Core vs Provider Version

One important concept is that Terraform and providers have separate versioning:

  • Terraform Core → the main engine that reads configuration and plans changes
  • Provider → a plugin that translates those plans into AWS API calls

They evolve independently, which means version control becomes critical.
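Because the engine and the plugin are released on separate schedules, each gets its own constraint in the configuration. A sketch (the version numbers are illustrative):

```hcl
terraform {
  # Constraint on Terraform Core itself
  required_version = ">= 1.5.0"

  # Constraint on the plugin, pinned independently of Core
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```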


Why Versioning Matters

From today’s learning, versioning is important for:

  • Compatibility between Terraform and providers
  • Stability across environments
  • Avoiding breaking changes
  • Reproducibility of infrastructure

If versions are not controlled, the same code can behave differently in different environments.


My Terraform Configuration

For this exercise, I used:

  • AWS provider
  • Random provider
  • Version constraints using the ~> (pessimistic constraint) operator
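Pulled together, the configuration looked roughly like this. The resource names, CIDR range, and region are my own illustrative choices; the ~> constraints allow newer minor and patch releases within a major version:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # any 5.x, but not 6.0
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"   # 3.5 and later 3.x, but not 4.0
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Random suffix so the S3 bucket name is globally unique
resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_s3_bucket" "demo" {
  bucket = "day2-demo-${random_id.suffix.hex}"
}
```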

Terraform Initialization

When I ran terraform init, it downloaded the required providers.

This step ensures Terraform knows which provider versions to use and records them in the dependency lock file (.terraform.lock.hcl) so that every run uses the same versions.
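A sketch of the step, run in the directory containing the configuration:

```shell
# Downloads the aws and random providers into .terraform/
# and records the selected versions in .terraform.lock.hcl
terraform init
```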


Terraform Plan

Next, I ran terraform plan to preview changes.

Terraform showed that it would create:

  • VPC
  • S3 bucket
  • Random ID

This step is critical because it allows you to verify changes before applying them.
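The preview step can be sketched as follows; the plan file name is my own choice, and saving it with -out means apply later executes exactly what was reviewed:

```shell
# Preview the execution plan without changing anything
terraform plan -out=day2.tfplan
```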



Terraform Apply (Real Deployment)

Finally, I ran terraform apply and created real AWS resources.

Resources created:

  • AWS VPC
  • S3 bucket
  • Random ID

This confirmed that Terraform successfully interacted with AWS using the configured provider.
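A sketch of the final step; Terraform asks for confirmation before touching real infrastructure, and terraform destroy tears the resources back down when you are done experimenting:

```shell
# Create the resources; type "yes" at the prompt to confirm
terraform apply

# Clean up afterwards to avoid ongoing charges
terraform destroy
```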



Key Takeaways

  • Providers are essential for Terraform to interact with AWS
  • Terraform core and providers are versioned separately
  • Version constraints help maintain stability
  • Planning step prevents unexpected changes
  • Infrastructure can be created reliably using code

My Reflection

This was the first time I moved from theory to actual execution with Terraform.

Seeing real AWS resources created from a simple configuration reinforced one idea:

Infrastructure as Code is not just about automation. It is about control, consistency, and confidence in what you build.

Video Reference


Jay
