Module Navigation
1. Getting Started with Infrastructure as Code and Terraform
2. Terraform Language Features and Project Structure
3. Terraform Modules and State Management
4. Advanced Terraform Techniques and Collaborative Workflows
5. Real-World Terraform Applications and Best Practices
Terraform Mastery: Build and Deploy Infrastructure as Code
From Zero to Hero with HashiCorp Terraform in Just One Week
Day 1: Getting Started with Infrastructure as Code and Terraform
Hey there! Welcome to your first day of what’s going to be an exciting journey into the world of Infrastructure as Code (IaC) with Terraform. I still remember my first encounter with manual infrastructure setup—clicking through endless AWS console screens, forgetting which settings I’d chosen, and then having no way to reproduce my work. Talk about a nightmare! Terraform changed all that for me, and I’m pumped to show you how it’ll do the same for you.
What the Heck is Infrastructure as Code?
Let’s start with the big picture. Infrastructure as Code is exactly what it sounds like—defining your infrastructure (servers, networks, databases, etc.) using code instead of manual processes. Before IaC, we’d click around web interfaces or run commands on terminals to set up our infrastructure. It was… well, let’s just say it was painful.
Imagine trying to set up 20 identical servers by hand. You’d probably make a mistake somewhere, and good luck remembering exactly what you did six months later when you need to update them! With IaC, you write a configuration file that describes what you want, and the tools make it happen—consistently, repeatedly, and predictably.
Why Terraform? (And Why Should You Care?)
There are several IaC tools out there—AWS CloudFormation, Azure Resource Manager templates, Google Cloud Deployment Manager, Ansible, Chef, Puppet… the list goes on. So why am I teaching you Terraform specifically?
First off, Terraform is cloud-agnostic. While CloudFormation only works with AWS and ARM templates only work with Azure, Terraform works across pretty much all cloud providers. I’ve personally used it to manage resources in AWS, Azure, GCP, and even on-premises VMware environments—all from the same tool and with similar syntax. That’s powerful stuff!
Second, Terraform has a declarative approach. You specify what you want your infrastructure to look like, and Terraform figures out how to make it happen. You don’t need to worry about the order of operations or writing complex procedural code. Just say “I want two web servers and a load balancer” and let Terraform handle the rest.
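To make that concrete, here's a hedged sketch of the "two web servers and a load balancer" idea in HCL. This is illustrative only, not a tested configuration: the AMI ID, availability zone, and resource names are placeholders.

```hcl
# Declare the desired end state; Terraform computes the steps and their order.
resource "aws_instance" "web" {
  count             = 2                       # "I want two web servers"
  ami               = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type     = "t2.micro"
  availability_zone = "us-west-2a"
}

# "...and a load balancer" in front of them (classic ELB for brevity)
resource "aws_elb" "front" {
  name               = "web-front"
  availability_zones = ["us-west-2a"]
  instances          = aws_instance.web[*].id

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}
```

Notice there's no "create the servers first, then attach them" logic anywhere: the reference to aws_instance.web[*].id is enough for Terraform to work out the ordering itself.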
Finally, Terraform has a massive community and ecosystem. The Terraform Registry has thousands of providers and modules you can leverage. It’s like having access to infrastructure Lego blocks that other people have already perfected.
Getting Terraform Installed (Let’s Actually Do Something!)
Enough theory—let’s get our hands dirty! First, we need to install Terraform on your machine.
For Mac users:
```shell
# Using Homebrew
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

# Verify installation
terraform version
```
For Windows users:
Download the appropriate package from the Terraform downloads page, unzip it, and add it to your PATH environment variable. Or better yet, use Chocolatey:
```shell
choco install terraform
```
For Linux users:
```shell
# Add HashiCorp GPG key
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -

# Add HashiCorp repository
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"

# Update and install
sudo apt-get update && sudo apt-get install terraform

# Verify installation
terraform version
```
Don’t worry if you hit a snag during installation—I’ve been there! Just drop me a message in the community forum, and I’ll help you troubleshoot. We’re in this together!
Your First Terraform Configuration (This Is Where the Magic Happens)
Now that you’ve got Terraform installed, let’s create our first configuration file. Create a new directory somewhere on your machine, and inside it, create a file named main.tf. This is where we’ll define our infrastructure.
For this first example, we’ll keep it super simple. Let’s create a configuration that provisions a random pet name. This doesn’t create any real infrastructure, but it’s a great way to see Terraform in action without needing cloud provider credentials.
```hcl
# This tells Terraform to use the "random" provider
terraform {
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "3.4.3"
    }
  }
}

# Configure the random provider
provider "random" {}

# Create a random pet name
resource "random_pet" "my_pet" {
  prefix    = "awesome"
  separator = "-"
  length    = 2
}

# Output the random pet name
output "pet_name" {
  value = random_pet.my_pet.id
}
```
Let me walk you through what this does:
- We tell Terraform we want to use the “random” provider, which gives us access to resources that generate random values.
- We create a “random_pet” resource, which generates a random name like “awesome-fluffy-tiger”.
- We define an output to display the generated name.
Now, from your terminal or command prompt, navigate to the directory containing your main.tf file and run:
```shell
# Initialize Terraform (downloads providers and sets up the environment)
terraform init

# See what Terraform will do
terraform plan

# Apply the changes
terraform apply
```
When you run terraform apply, Terraform will ask for confirmation before making any changes. Type “yes” and hit Enter.
Voilà! You should see your random pet name displayed as an output. You’ve just run your first Terraform configuration! It might seem trivial, but you’ve just used the exact same workflow that’s used to provision complex infrastructure in production environments.
Understanding the Terraform Workflow
That little example introduced you to the core Terraform workflow, which consists of three main commands:
terraform init
This command initializes a Terraform working directory. It downloads the providers specified in your configuration (like AWS, Azure, or the random provider we just used) and sets up the backend for storing state. You only need to run this once for each new configuration directory, or when you change your providers or backend configuration.
terraform plan
The plan command shows you what Terraform is going to do before it does it. Think of it as a dry run. It compares the current state of your infrastructure with what’s in your configuration files and determines what needs to be created, modified, or destroyed. In the plan output, a + prefix marks a resource to be created, ~ an in-place update, - a destruction, and -/+ a destroy-and-recreate. This is a safety mechanism that lets you verify changes before applying them.
terraform apply
This is where the rubber meets the road. The apply command executes the changes required to reach the desired state specified in your configuration files. It will show you the plan again and ask for confirmation before proceeding.
There’s a fourth command worth mentioning as well:
terraform destroy
This command is the opposite of apply—it destroys all resources managed by your Terraform configuration. This is incredibly useful for temporary infrastructure or test environments. Just be careful with it in production!
Practice exercise: open your main.tf file and change the prefix from “awesome” to something else, like your name or favorite word. Then run terraform plan followed by terraform apply to see how Terraform handles changes to existing resources. What happened, and why? (Hint: the random provider can’t update a pet name in place, so Terraform planned to destroy the old resource and create a replacement, shown as -/+ in the plan.)
Real-world Example: Creating an AWS S3 Bucket
Random pet names are fun, but let’s look at something more realistic. If you have an AWS account, let’s create an S3 bucket using Terraform. If you don’t have AWS, don’t worry—just read along to understand the concepts, and we’ll look at other providers in later days.
Create a new directory and a new main.tf file with the following content:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-west-2" # Change this to your preferred region
}

# Create an S3 bucket
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-unique-terraform-bucket-name-2023" # Must be globally unique!

  tags = {
    Name        = "My Terraform Bucket"
    Environment = "Dev"
    ManagedBy   = "Terraform"
  }
}

# Output the bucket name
output "bucket_name" {
  value = aws_s3_bucket.my_bucket.bucket
}

# Output the bucket ARN
output "bucket_arn" {
  value = aws_s3_bucket.my_bucket.arn
}
```
Before you run this, you’ll need to authenticate with AWS. The easiest way is to install the AWS CLI and run aws configure, which will prompt you for your AWS access key and secret key. Terraform will automatically use these credentials.
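If you keep multiple sets of credentials, you can also point the provider at a specific named profile from your AWS CLI configuration. A small sketch (the profile name here is just an example):

```hcl
provider "aws" {
  region  = "us-west-2"
  profile = "default" # a profile defined in ~/.aws/credentials
}
```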
Once you’ve set up authentication, follow the usual workflow:
```shell
terraform init
terraform plan
terraform apply
```
After approving the apply, Terraform will create an S3 bucket for you! You can verify it exists by checking the AWS console or using the AWS CLI.
When you’re done experimenting, don’t forget to clean up:
```shell
terraform destroy
```
Understanding Terraform State
You might have noticed a new file in your directory after running Terraform: terraform.tfstate. This file is crucial: it’s how Terraform keeps track of what it has created and the current state of your infrastructure.
The state file maps the resources in your configuration to real-world infrastructure. When you run terraform plan or terraform apply, Terraform compares your configuration to the state file to determine what changes need to be made.
In a team environment, you’d typically store this state file remotely (e.g., in an S3 bucket or Terraform Cloud) so that multiple people can collaborate on the same infrastructure. We’ll cover that in a later session, but for now, just know that this file is important and should be treated with care—it can contain sensitive information!
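As a quick preview, a remote backend is declared inside the terraform block. Here’s a minimal sketch, assuming an S3 bucket for state already exists (the bucket name and key are hypothetical); we’ll set this up properly in a later session:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"     # hypothetical; must already exist
    key    = "my-project/terraform.tfstate"  # path to the state object
    region = "us-west-2"
  }
}
```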
Pro tip: you can inspect and modify the state directly with commands like terraform state list (to see every resource Terraform is tracking) and terraform state rm (to stop tracking a resource without destroying it).
Day 1 Wrap-up
Whew! That was a lot for Day 1, but you’ve already accomplished so much:
- You understand what Infrastructure as Code is and why it’s valuable
- You’ve installed Terraform and learned the basic workflow (init, plan, apply, destroy)
- You’ve created your first Terraform configuration and seen it in action
- If you have AWS, you’ve even created a real cloud resource using Terraform
- You’ve learned about Terraform state and why it’s important
I’m genuinely impressed by how much ground we’ve covered. When I first started with Terraform, it took me weeks to get comfortable with these concepts, and you’ve tackled them all in one day!
Tomorrow, we’ll dive deeper into Terraform’s language features. We’ll learn about variables, outputs, data sources, and how to structure your Terraform code for better maintainability. We’ll also start building something more complex that resembles a real-world application infrastructure.
Before you go, take a moment to try the practice exercise if you haven’t already, and feel free to experiment with the configurations we’ve written. The best way to learn Terraform is to use it!
See you tomorrow for Day 2!
Knowledge Check
What is the primary purpose of Infrastructure as Code?
- To make infrastructure cheaper
- To define and provision infrastructure through code instead of manual processes
- To replace cloud computing
- To eliminate the need for system administrators
Knowledge Check
Which command shows you what changes Terraform will make before applying them?
- terraform show
- terraform plan
- terraform validate
- terraform preview
Knowledge Check
What is the purpose of the terraform.tfstate file?
- To store your Terraform code
- To record the current state of your infrastructure
- To define provider configurations
- To encrypt sensitive data
Day 2: Terraform Language Features and Project Structure
Welcome back, infrastructure enthusiast! How’d you sleep? Dreaming of perfectly automated cloud resources, I hope! Yesterday we dipped our toes into the Terraform ocean, and today we’re going to swim a bit deeper. We’ll explore the language features that make Terraform configurations flexible, reusable, and maintainable.
I’m particularly excited about today’s session because this is where Terraform really started to click for me. When I first started using Terraform, I was just copying and pasting examples from the docs. But once I understood variables, locals, and output values, I started building configurations that were truly my own. That “aha” moment is what I want to help you experience today.
Making Your Configurations Flexible with Variables
Hard-coding values in your Terraform configuration is fine for quick experiments, but it’s not great for real-world use. What if you want to deploy the same infrastructure in different environments, or with slight variations? That’s where variables come in.
Think of Terraform variables like function parameters in programming—they let you pass in values from outside your configuration, making your code more flexible and reusable.
Let’s refactor yesterday’s AWS S3 bucket example to use variables. Create a new directory with three files:
variables.tf
```hcl
variable "aws_region" {
  description = "The AWS region to deploy resources in"
  type        = string
  default     = "us-west-2"
}

variable "bucket_name" {
  description = "Name of the S3 bucket (must be globally unique)"
  type        = string
}

variable "environment" {
  description = "Deployment environment (e.g., dev, staging, prod)"
  type        = string
  default     = "dev"
}
```
main.tf
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

resource "aws_s3_bucket" "my_bucket" {
  bucket = var.bucket_name

  tags = {
    Name        = "My Terraform Bucket"
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}
```
outputs.tf
```hcl
output "bucket_name" {
  description = "Name of the created S3 bucket"
  value       = aws_s3_bucket.my_bucket.bucket
}

output "bucket_arn" {
  description = "ARN of the created S3 bucket"
  value       = aws_s3_bucket.my_bucket.arn
}
```
Notice how we’ve split our configuration into multiple files. This is a common practice to keep things organized, especially as your configurations grow larger. Terraform automatically loads all .tf files in the current directory.
Fun fact: Terraform treats all the .tf files in a directory as a single configuration. The convention of putting variables in variables.tf and outputs in outputs.tf is just for human organization and readability.
Now, when you run terraform apply, you’ll be prompted to enter a value for bucket_name because it doesn’t have a default. You can also provide variable values in several other ways:
Using a .tfvars file
Create a file named terraform.tfvars:
```hcl
bucket_name = "my-super-unique-terraform-bucket-2023"
environment = "staging"
```
Terraform automatically loads this file if it’s named terraform.tfvars or ends with .auto.tfvars.
Using command-line flags
```shell
terraform apply -var="bucket_name=my-cli-specified-bucket" -var="environment=prod"
```
Using environment variables
```shell
export TF_VAR_bucket_name="my-env-var-bucket"
export TF_VAR_environment="test"
terraform apply
```

If the same variable is set in more than one place, Terraform uses the highest-precedence source: -var and -var-file flags win over *.auto.tfvars files, which win over terraform.tfvars, which wins over TF_VAR_ environment variables.
Local Values: Your Configuration’s Internal Variables
While input variables are great for values that come from outside your configuration, local values (or “locals”) are useful for values computed within your configuration that you want to reference in multiple places.
Let’s extend our example:
```hcl
# Add this to main.tf

locals {
  # Combine environment and a timestamp for unique naming
  timestamp = formatdate("YYYYMMDDhhmmss", timestamp())

  # Common tags to be assigned to all resources
  common_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    Owner       = "YourTeam"
    Project     = "InfrastructureDemo"
  }
}

resource "aws_s3_bucket" "my_bucket" {
  bucket = var.bucket_name

  tags = merge(local.common_tags, {
    Name = "My Terraform Bucket"
  })
}
```
Now we’ve created local values for a timestamp and common tags. The timestamp() function returns the current date and time, and formatdate() formats it how we want. The merge() function combines our common tags with the resource-specific “Name” tag.
Locals are great for DRY (Don’t Repeat Yourself) configurations. I use them all the time for things like constructing resource names based on naming conventions, defining common tags or properties, or calculating values that are used in multiple places.
Data Sources: Reading Existing Infrastructure
So far, we’ve focused on creating new infrastructure. But what if you want to reference existing resources that weren’t created by Terraform? That’s where data sources come in.
Data sources let you fetch information about existing resources and use that information in your configuration. They’re like read-only queries for your infrastructure.
Let’s say you want to reference an existing VPC in your AWS account:
```hcl
# Add this to main.tf

data "aws_vpc" "default" {
  default = true
}

output "default_vpc_id" {
  value = data.aws_vpc.default.id
}

output "default_vpc_cidr" {
  value = data.aws_vpc.default.cidr_block
}
```
This will query AWS for your default VPC and output its ID and CIDR block. You can then use these values elsewhere in your configuration with data.aws_vpc.default.id and data.aws_vpc.default.cidr_block.
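For instance, you could place a new subnet inside that existing VPC. This is a hedged sketch: the CIDR block and tag are illustrative, and the CIDR must fall within your VPC’s actual range.

```hcl
resource "aws_subnet" "app" {
  vpc_id     = data.aws_vpc.default.id # reference the data source
  cidr_block = "172.31.200.0/24"       # illustrative; must be inside the VPC CIDR

  tags = {
    Name = "app-subnet"
  }
}
```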
A quick confession: when I was learning this, I once wrote a data block when I should have used a resource block. The error messages weren’t very helpful, and I felt like a complete idiot when I finally spotted the issue. The distinction matters: data blocks only read existing infrastructure, while resource blocks create and manage it.
Structuring a Real-World Project
Now that we understand variables, locals, and data sources, let’s put it all together and create a more realistic project structure. We’ll build a simple web application infrastructure with an S3 bucket for static content and a CloudFront distribution to serve it.
Here’s how we’ll organize our files:
```
my-terraform-project/
├── main.tf           # Main resources
├── variables.tf      # Input variable declarations
├── outputs.tf        # Output declarations
├── locals.tf         # Local values
├── providers.tf      # Provider configuration
└── terraform.tfvars  # Variable values (gitignored for sensitive values)
```
Let’s create each file:
providers.tf
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }

  # In a real project, you might configure a backend here
  # backend "s3" {
  #   bucket = "my-terraform-state-bucket"
  #   key    = "my-project/terraform.tfstate"
  #   region = "us-west-2"
  # }
}

provider "aws" {
  region = var.aws_region
}
```
variables.tf
```hcl
variable "aws_region" {
  description = "The AWS region to deploy resources in"
  type        = string
  default     = "us-west-2"
}

variable "project_name" {
  description = "Name of the project, used for resource naming"
  type        = string
}

variable "environment" {
  description = "Deployment environment (e.g., dev, staging, prod)"
  type        = string
  default     = "dev"

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be one of: dev, staging, prod."
  }
}
```
locals.tf
```hcl
locals {
  # Resource naming using a consistent convention
  name_prefix = "${var.project_name}-${var.environment}"

  # S3 bucket for static website content
  website_bucket_name = "${local.name_prefix}-website"

  # Common tags for all resources
  common_tags = {
    Project     = var.project_name
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}
```
main.tf
```hcl
# S3 bucket for static website content
resource "aws_s3_bucket" "website" {
  bucket = local.website_bucket_name
  tags   = local.common_tags
}

# Bucket configuration for website hosting
resource "aws_s3_bucket_website_configuration" "website" {
  bucket = aws_s3_bucket.website.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

# Bucket policy to allow public access for website
resource "aws_s3_bucket_policy" "website" {
  bucket = aws_s3_bucket.website.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = "*"
        Action    = "s3:GetObject"
        Resource  = "${aws_s3_bucket.website.arn}/*"
      }
    ]
  })
}

# CloudFront distribution for the website
resource "aws_cloudfront_distribution" "website" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    domain_name = aws_s3_bucket_website_configuration.website.website_endpoint
    origin_id   = local.website_bucket_name

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = local.website_bucket_name
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  tags = local.common_tags
}
```
outputs.tf
```hcl
output "website_bucket_name" {
  description = "Name of the website bucket"
  value       = aws_s3_bucket.website.bucket
}

output "website_endpoint" {
  description = "S3 website endpoint"
  value       = aws_s3_bucket_website_configuration.website.website_endpoint
}

output "cloudfront_distribution_id" {
  description = "ID of the CloudFront distribution"
  value       = aws_cloudfront_distribution.website.id
}

output "cloudfront_domain_name" {
  description = "Domain name of the CloudFront distribution"
  value       = aws_cloudfront_distribution.website.domain_name
}
```
terraform.tfvars
```hcl
project_name = "my-awesome-website"
environment  = "dev"
```
This is a more complex example, but it demonstrates a realistic Terraform project structure. We’re creating an S3 bucket configured for static website hosting, with a bucket policy that allows public access to the objects, and a CloudFront distribution in front of it for better performance and HTTPS support.
I’ll admit—I struggled for hours the first time I set up CloudFront with S3. There are so many configuration options and it’s easy to miss something. Don’t be discouraged if it takes you a few tries to get it right! What matters is that you’re learning and making progress.
Practice exercise: deploy this configuration with terraform init, terraform plan, and terraform apply. Then upload a simple index.html file to your S3 bucket and access your website through the CloudFront domain name. When you’re done experimenting, don’t forget to run terraform destroy to avoid unnecessary AWS charges. What challenges did you encounter during deployment? Could you modify the configuration to use your own domain name instead of the CloudFront domain?
Handling Dependencies in Terraform
One thing you might have noticed in our configuration is that we didn’t explicitly tell Terraform the order in which to create resources. That’s because Terraform automatically figures out dependencies based on references between resources.
For example, when we referenced aws_s3_bucket.website.id in the aws_s3_bucket_website_configuration resource, Terraform understood that the bucket needs to exist before we can configure it for website hosting. This implicit dependency management is one of Terraform’s most powerful features.
Sometimes, though, dependencies aren’t obvious from the configuration. In those cases, you can use the depends_on attribute to explicitly declare dependencies:
```hcl
resource "aws_instance" "app_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  depends_on = [
    aws_s3_bucket.logs
  ]
}
```
This tells Terraform that the EC2 instance depends on the S3 bucket, even if there’s no reference in the configuration that would imply that.
Day 2 Wrap-up
Another day, another set of Terraform superpowers unlocked! Today we’ve covered:
- Using variables to make your configurations flexible and reusable
- Working with local values for internal calculations and DRY configurations
- Fetching information about existing infrastructure with data sources
- Structuring a real-world Terraform project
- Understanding how Terraform manages dependencies between resources
I can’t stress enough how important these concepts are. When I look back at my Terraform journey, understanding variables and project structure was the turning point that took me from stumbling around with copy-pasted configurations to confidently building my own infrastructure from scratch.
Tomorrow we’ll dive into one of Terraform’s most powerful features: modules. We’ll learn how to create reusable, shareable infrastructure components that can be composed like building blocks. We’ll also talk about state management and collaboration in team environments.
Until then, experiment with the configurations we’ve created today. Try modifying them, breaking them (intentionally or unintentionally—I’ve done both!), and fixing them. The hands-on experience is invaluable.
See you tomorrow for Day 3!
Knowledge Check
Which Terraform feature allows you to pass values from outside your configuration to make it more flexible?
- Resources
- Providers
- Variables
- Outputs
Knowledge Check
What is the purpose of local values (locals) in Terraform?
- To expose information to other modules
- To receive input from users
- To define values used multiple times within a configuration
- To connect to local resources
Knowledge Check
When organizing Terraform files, why do many users create separate files for variables, outputs, and resources?
- It's required by Terraform
- It improves performance
- For better organization and readability
- To isolate errors
Congratulations!
You've completed the entire Terraform Mastery: Build and Deploy Infrastructure as Code module.
Keep practicing these skills to master them completely.