Terraform state setup
Before we start with the static website implementation, we need to set up a remote Terraform state on S3, with DynamoDB for state locking, to hold our state file.
Terraform state module
We create a module which can be reused for future projects to create our remote Terraform state as follows:
resource "random_string" "tf_remote_state_s3_buckets" {
for_each = var.tf_remote_state_resource_configs == null ? {} : var.tf_remote_state_resource_configs
length = 4
special = false
upper = false
}
resource "aws_s3_bucket" "tf_remote_state_s3_buckets" {
for_each = var.tf_remote_state_resource_configs == null ? {} : var.tf_remote_state_resource_configs
bucket = "${each.value.prefix}-tf-state-${random_string.tf_remote_state_s3_buckets[each.key].result}"
force_destroy = true
}
resource "aws_s3_bucket_versioning" "tf_remote_state_s3_buckets" {
for_each = var.tf_remote_state_resource_configs == null ? {} : var.tf_remote_state_resource_configs
bucket = aws_s3_bucket.tf_remote_state_s3_buckets[each.key].id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_public_access_block" "tf_remote_state_s3_buckets_pabs" {
for_each = var.tf_remote_state_resource_configs == null ? {} : var.tf_remote_state_resource_configs
bucket = aws_s3_bucket.tf_remote_state_s3_buckets[each.key].id
block_public_acls = var.s3_public_access_block
block_public_policy = var.s3_public_access_block
ignore_public_acls = var.s3_public_access_block
restrict_public_buckets = var.s3_public_access_block
}
# Terraform State Locking
resource "random_string" "tf_remote_state_lock_tables" {
for_each = var.tf_remote_state_resource_configs == null ? {} : var.tf_remote_state_resource_configs
length = 4
special = false
upper = false
}
resource "aws_dynamodb_table" "tf_remote_state_lock_tables" {
for_each = var.tf_remote_state_resource_configs == null ? {} : var.tf_remote_state_resource_configs
name = "${each.value.prefix}-tf-state-lock-${random_string.tf_remote_state_lock_tables[each.key].result}"
billing_mode = each.value.ddb_billing_mode
hash_key = each.value.ddb_hash_key
attribute {
name = each.value.ddb_hash_key
type = "S"
}
}
Inside the GitHub repo, you can also check out the data.tf and variables.tf files to see the rest of the module.
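For orientation, the input variables referenced above could be declared roughly as follows; this is a minimal sketch, and the exact types and defaults in the repo may differ:
# variables.tf (sketch, assumed shape; check the repo for the real definitions)
variable "tf_remote_state_resource_configs" {
  description = "Map of remote state resource configurations, keyed by an arbitrary name"
  type = map(object({
    prefix           = string
    ddb_billing_mode = optional(string, "PAY_PER_REQUEST")
    ddb_hash_key     = optional(string, "LockID")
  }))
  default = null
}

variable "s3_public_access_block" {
  description = "Whether to block all public access on the state buckets"
  type        = bool
  default     = true
}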
Afterwards, we leverage this module to create our remote Terraform state.
Terraform state core
The core leverages the previously mentioned module to provision the Terraform remote state.
Let's start with the main file. This is as follows:
module "aws-tf" {
source = "../module/"
tf_remote_state_resource_configs = {
tf_state : {
prefix = <fill-in-your-variable>
}
}
}
Fill in the placeholder with the prefix to use for the S3 bucket and DynamoDB table names.
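For example, with a hypothetical prefix of "demo", the module would create a bucket named demo-tf-state-&lt;suffix&gt; and a lock table named demo-tf-state-lock-&lt;suffix&gt;:
module "aws-tf" {
  source = "../module/"

  tf_remote_state_resource_configs = {
    tf_state : {
      prefix = "demo" # hypothetical prefix, pick your own
    }
  }
}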
The provider file is as follows:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # backend "s3" {
  #   bucket         = <fill-in-your-bucket>
  #   key            = "state/terraform.tfstate"
  #   region         = "eu-central-1"
  #   encrypt        = true
  #   dynamodb_table = <fill-in-your-dynamodb-table>
  # }
}
provider "aws" {
region = var.aws_region
default_tags {
tags = {
Management = "Terraform"
}
}
}
Notice that the backend block is currently commented out. This is because we want to first provision the infrastructure that will be used as remote state, and then start using it.
Run the following to provision the infrastructure:
terraform init
terraform plan
terraform apply
Once the infrastructure is provisioned, take note of the newly created S3 bucket and DynamoDB table names, add them to the previously mentioned backend.tf file, and uncomment the backend block.
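With hypothetical resource names (yours will differ because of the random suffixes), the uncommented block would look like this:
backend "s3" {
  bucket         = "demo-tf-state-ab12"      # hypothetical bucket name
  key            = "state/terraform.tfstate"
  region         = "eu-central-1"
  encrypt        = true
  dynamodb_table = "demo-tf-state-lock-cd34" # hypothetical lock table name
}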
Then run the following:
terraform init
and confirm the prompt asking whether to copy the existing state to the new backend. To validate that the Terraform remote state has been configured correctly, navigate to the newly created S3 bucket and locate your state file.
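Alternatively, you can check from the command line (bucket name assumed here, use your own):
aws s3 ls s3://demo-tf-state-ab12/state/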