Day 18 - Serverless Image Processing with AWS Lambda, S3, and Terraform

For Day 18 of my AWS Terraform learning journey, I built a backend-only serverless image processing pipeline using Amazon S3, AWS Lambda, Lambda Layers, and Terraform.

The idea was simple: upload one image to an S3 bucket, and AWS automatically creates multiple processed versions of it in another S3 bucket.

There is no frontend and no EC2 server in this project. The workflow is completely event driven.

Architecture


The flow works like this:

  1. I upload a sample image to the upload S3 bucket.
  2. S3 sends an ObjectCreated event.
  3. Lambda is triggered automatically.
  4. Lambda uses the Pillow library to process the image.
  5. Five generated image variants are saved into the processed S3 bucket.
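The steps above can be sketched as a minimal Lambda handler. This is an illustrative sketch, not the project's actual code: the PROCESSED_BUCKET environment variable, the variant quality settings, and the helper names (variant_outputs, handler) are assumptions. boto3 and Pillow are imported lazily inside the handler because they exist only at Lambda runtime (boto3 ships with the runtime, Pillow comes from the layer).

```python
import io
import os
import posixpath

def variant_outputs(key):
    """Map an uploaded key like 'sample.jpg' to the five output keys and save options."""
    stem = posixpath.splitext(posixpath.basename(key))[0]
    return {
        f"{stem}_compressed.jpg": ("JPEG", {"quality": 70}),
        f"{stem}_low.jpg": ("JPEG", {"quality": 30}),
        f"{stem}_webp.webp": ("WEBP", {}),
        f"{stem}_png.png": ("PNG", {}),
        f"{stem}_thumbnail.jpg": ("JPEG", {"quality": 80}),
    }

def handler(event, context):
    # boto3 ships with the Lambda runtime; Pillow is provided by the layer.
    import boto3
    from PIL import Image

    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    # Read the uploaded image into memory and decode it with Pillow.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    original = Image.open(io.BytesIO(body)).convert("RGB")

    for out_key, (fmt, save_args) in variant_outputs(key).items():
        img = original.copy()
        if out_key.endswith("_thumbnail.jpg"):
            img.thumbnail((128, 128))  # shrinks in place, preserving aspect ratio
        buf = io.BytesIO()
        img.save(buf, format=fmt, **save_args)
        s3.put_object(
            Bucket=os.environ["PROCESSED_BUCKET"],  # assumed env var name
            Key=out_key,
            Body=buf.getvalue(),
        )
```

The naming helper is kept separate from the handler so the output-key logic can be tested without AWS credentials or the Pillow layer.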

Project Structure

The project has a simple structure.

day-18-image-processor/
├── sample.jpg
├── deploy.sh
├── destroy.sh
├── lambda/
├── scripts/
└── terraform/

The sample.jpg file is placed directly in the project root. This makes the test script simple because it always knows where to find the image.


Infrastructure Created by Terraform

Terraform creates the complete backend pipeline.

Resource | Purpose
-------- | -------
Upload S3 bucket | Stores the original image
Processed S3 bucket | Stores generated image variants
Lambda function | Processes the uploaded image
Lambda layer | Provides the Pillow image processing library
IAM role and policy | Gives Lambda the required permissions
S3 event notification | Invokes Lambda when an image is uploaded
CloudWatch logs | Stores Lambda execution logs

Why a Lambda Layer Is Needed

The Lambda function uses Pillow to process images. Pillow is not included by default in the Lambda Python runtime, so it must be packaged separately.

I used a Lambda layer for Pillow. This keeps the dependency separate from the main Lambda function code.

One important issue I faced was packaging Pillow from Windows. Pillow ships native binaries, and Lambda runs on Amazon Linux, so the layer must be built for the Lambda Linux runtime.

The final package command pins the target platform so pip downloads prebuilt Linux wheels instead of using a local build (the --target path here is illustrative; a layer zip must place libraries under a top-level python/ directory):

pip install Pillow \
  --platform manylinux2014_x86_64 \
  --python-version 3.11 \
  --only-binary=:all: \
  --target python/

This fetches a manylinux wheel, so the layer is Lambda compatible even when built on Windows.


S3 Trigger

The upload bucket is configured with an S3 ObjectCreated event. Whenever a new image is uploaded, S3 invokes the Lambda function.


This is similar to a database trigger. In a database, inserting a row can trigger another action. Here, uploading a file triggers backend processing.
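Inside the function, the bucket and object key come out of the notification payload. A minimal parse, assuming the standard S3 event shape (note that object keys arrive URL-encoded, so a key with spaces needs unquote_plus):

```python
from urllib.parse import unquote_plus

def parse_s3_event(event):
    """Extract (bucket, key) from an S3 ObjectCreated notification."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], unquote_plus(record["object"]["key"])

# Example payload in the shape S3 sends (fields trimmed to the ones used above):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "upload-bucket"},
                "object": {"key": "my+photo.jpg"}}}
    ]
}
```

Here parse_s3_event(sample_event) returns ("upload-bucket", "my photo.jpg"): the "+" in the raw key decodes to a space.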

Testing the Project

After deployment, I placed sample.jpg in the root folder and ran:

./scripts/test_upload.sh

The script uploads the image to the upload bucket, waits for Lambda processing, and lists files from the processed bucket.
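The same flow, sketched in Python for illustration (the real project uses a shell script; the function name, the fixed wait, and the key name are assumptions, and the S3 client is passed in so the sketch can be exercised without AWS):

```python
import time

def run_test(s3, upload_bucket, processed_bucket, path="sample.jpg", wait=15):
    """Upload the sample image, wait briefly for the async Lambda, list the outputs."""
    s3.upload_file(path, upload_bucket, "sample.jpg")
    time.sleep(wait)  # crude wait; Lambda is invoked asynchronously by S3
    resp = s3.list_objects_v2(Bucket=processed_bucket)
    return [obj["Key"] for obj in resp.get("Contents", [])]
```

A fixed sleep is fine for a demo; a more robust script would poll the processed bucket until all five keys appear or a timeout is hit.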

The processed bucket should show five generated files:

sample_compressed.jpg
sample_low.jpg
sample_webp.webp
sample_png.png
sample_thumbnail.jpg

Monitoring with CloudWatch

I tailed the Lambda logs with the AWS CLI (the MSYS_NO_PATHCONV=1 prefix stops Git Bash on Windows from rewriting the leading slash in the log group name as a local path):

MSYS_NO_PATHCONV=1 aws logs tail "/aws/lambda/LAMBDA_NAME" --region us-east-1 --since 30m

CloudWatch helped me troubleshoot the Lambda execution.

Security

The buckets are private: public access is blocked, server side encryption is enabled, and IAM permissions are scoped down.

Lambda can read only from the upload bucket and write only to the processed bucket.

This least privilege setup is the right pattern even for a small demo.
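The shape of a policy that enforces this split, written here as a Python dict mirroring the IAM JSON (the bucket names in the ARNs are illustrative):

```python
# Read-only on the upload bucket, write-only on the processed bucket.
lambda_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::upload-bucket-name/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::processed-bucket-name/*",
        },
    ],
}
```

Because the two statements point at different buckets, the function cannot overwrite its own input or read anything from the output bucket.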

Cleanup

Since S3 versioning is enabled, deleting the current objects is not enough. The destroy script removes current objects, old object versions, and delete markers before running terraform destroy.

./destroy.sh
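The version purge can be sketched with boto3 (a hypothetical helper; the actual destroy.sh may do the same with AWS CLI calls instead):

```python
def purge_bucket(bucket_name):
    """Delete every object version and delete marker so Terraform can remove the bucket."""
    import boto3  # lazy import: only needed when actually run against AWS
    bucket = boto3.resource("s3").Bucket(bucket_name)
    # The object_versions collection covers current objects, old versions,
    # and delete markers; delete() batches them into DeleteObjects calls.
    bucket.object_versions.delete()
```

After this runs, the bucket is truly empty and terraform destroy can delete it.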

What I Learned

This project helped me understand how S3 and Lambda work together in a real event-driven backend.

I also learned that Lambda dependencies with native binaries must be packaged for the Lambda runtime, not just the local machine.

The biggest takeaway is that serverless architecture is a great fit for event-based workloads. For this type of use case, I do not need to manage EC2 servers or background workers.

Final Thoughts

Day 18 was a practical serverless automation project.

It connected S3, Lambda, IAM, Lambda Layers, CloudWatch, and Terraform into one working pipeline.

This type of pattern can be extended for real use cases like image optimization, document processing, metadata extraction, or automated file validation.


Video Reference


Jay
