Static Website Implementation
After setting up our remote Terraform state, we are ready to implement the static website infrastructure. Once we are done, you will have a website hosted on AWS, with an SSL certificate and a redirect from the non-www record to the www record.
S3 bucket policy
s3-policy.json
We start with the templates folder and the S3 bucket policy that will be attached to both the non-www and the www S3 buckets:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::${bucket}/*"
}
]
}
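Terraform's templatefile() function substitutes the ${bucket} placeholder when the policy is attached. As a quick illustration of that substitution, here is a Python sketch using string.Template, which happens to use the same ${...} syntax (the bucket name www.example.com is a placeholder, not from the original):

```python
# Illustrative only: mimics what Terraform's templatefile() does with the
# ${bucket} placeholder in s3-policy.json.
from string import Template
import json

policy_template = """{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${bucket}/*"
    }
  ]
}"""

rendered = Template(policy_template).substitute(bucket="www.example.com")
policy = json.loads(rendered)  # confirm the rendered policy is valid JSON
print(policy["Statement"][0]["Resource"])  # -> arn:aws:s3:::www.example.com/*
```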
Providers file
provider.tf
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
backend "s3" {
bucket = <fill-in-your-bucket>
key = "static-website/terraform.tfstate"
region = "eu-central-1"
encrypt = true
dynamodb_table = <fill-in-your-dynamodb-table>
}
}
provider "aws" {
region = var.aws_region
default_tags {
tags = {
Management = "Terraform"
}
}
}
provider "aws" {
alias = "acm_provider"
region = "us-east-1"
}
The provider.tf
file leverages the previously created remote Terraform state infrastructure. Notice the key field: it stores the static website's Terraform state file in the same bucket as before, but under a different path.
Variables file
variables.tf
variable "aws_region" {
type = string
description = "The AWS region you wish to deploy your resources to."
default = "eu-central-1"
}
variable "domain_name" {
type = string
description = "The domain name for the website."
default = <fill-in-your-non-www-domain>
}
variable "bucket_name" {
type = string
description = "The name of the bucket without the www. prefix. Normally domain_name."
default = <fill-in-your-non-www-bucket-name>
}
variable "s3_public_access_block" {
type = bool
default = false
description = "Conditional enabling of S3 Public Access Block."
}
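Rather than editing the defaults in variables.tf, you can supply these values through a terraform.tfvars file. A minimal sketch (example.com is a placeholder domain, not from the original):

```hcl
# terraform.tfvars -- example values; replace with your own domain.
aws_region             = "eu-central-1"
domain_name            = "example.com"
bucket_name            = "example.com"
s3_public_access_block = false
```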
The variables.tf
file contains the variables we use to provision the S3 buckets and the Route 53 public hosted zone and records.
S3 file
s3.tf
resource "aws_s3_bucket" "www_bucket" {
bucket = "www.${var.bucket_name}"
force_destroy = true
}
resource "aws_s3_bucket_website_configuration" "static_website_s3_buckets" {
bucket = aws_s3_bucket.www_bucket.id
index_document {
suffix = "index.html"
}
error_document {
key = "404.jpeg"
}
}
resource "aws_s3_bucket_public_access_block" "static_website_s3_buckets_pabs" {
bucket = aws_s3_bucket.www_bucket.id
block_public_acls = var.s3_public_access_block
block_public_policy = var.s3_public_access_block
ignore_public_acls = var.s3_public_access_block
restrict_public_buckets = var.s3_public_access_block
}
resource "aws_s3_bucket_policy" "static_website_s3_buckets_policy" {
bucket = aws_s3_bucket.www_bucket.id
policy = templatefile("templates/s3-policy.json", { bucket = "www.${var.bucket_name}" })
}
resource "aws_s3_bucket_cors_configuration" "static_website_s3_buckets_cors" {
bucket = aws_s3_bucket.www_bucket.id
cors_rule {
allowed_headers = ["Authorization", "Content-Length"]
allowed_methods = ["GET", "POST"]
allowed_origins = ["https://www.${var.domain_name}"]
max_age_seconds = 3000
}
}
# S3 bucket for redirecting non-www to www.
resource "aws_s3_bucket" "root_bucket" {
bucket = var.bucket_name
force_destroy = true
}
resource "aws_s3_bucket_public_access_block" "root_s3_buckets_pabs" {
bucket = aws_s3_bucket.root_bucket.id
block_public_acls = var.s3_public_access_block
block_public_policy = var.s3_public_access_block
ignore_public_acls = var.s3_public_access_block
restrict_public_buckets = var.s3_public_access_block
}
resource "aws_s3_bucket_policy" "root_s3_buckets_policy" {
bucket = aws_s3_bucket.root_bucket.id
policy = templatefile("templates/s3-policy.json", { bucket = var.bucket_name })
}
resource "aws_s3_bucket_website_configuration" "root_s3_buckets" {
bucket = aws_s3_bucket.root_bucket.id
redirect_all_requests_to {
host_name = "www.${var.domain_name}"
protocol = "https"
}
}
The s3.tf
file contains the two S3 buckets (www and non-www). The non-www S3 bucket holds no objects; it only carries a website configuration that redirects all requests to the www S3 bucket.
The www S3 bucket contains all the objects for your website, along with a website configuration that enables static website hosting on it.
Note that, as configured, neither S3 bucket restricts public access. This can be changed if required; please check the relevant documentation.
We also attach the previously mentioned S3 bucket policy, plus a CORS configuration that allows GET and POST requests from the www origin, with the Authorization and Content-Length request headers.
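If you want to inspect the resulting S3 website endpoints after applying, an outputs.tf along these lines can help (a sketch; the output names are arbitrary, the resource references match the s3.tf above):

```hcl
# outputs.tf -- optional convenience outputs; the names are arbitrary.
output "www_website_endpoint" {
  value = aws_s3_bucket_website_configuration.static_website_s3_buckets.website_endpoint
}

output "root_website_endpoint" {
  value = aws_s3_bucket_website_configuration.root_s3_buckets.website_endpoint
}
```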
ACM file
acm.tf
resource "aws_acm_certificate" "ssl_certificate" {
provider = aws.acm_provider
domain_name = var.domain_name
subject_alternative_names = ["*.${var.domain_name}"]
validation_method = "DNS"
lifecycle {
create_before_destroy = true
}
}
resource "aws_acm_certificate_validation" "cert_validation" {
provider = aws.acm_provider
certificate_arn = aws_acm_certificate.ssl_certificate.arn
validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
timeouts {
create = "5m"
}
}
For the acm.tf
file, we create a wildcard certificate covering both the non-www and www records, validated via DNS. Note the acm_provider alias: CloudFront requires ACM certificates to be issued in us-east-1, which is why this provider was defined in provider.tf. You can also use EMAIL validation instead; please check the relevant documentation.
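If you opt for EMAIL validation, the certificate resource would look like this sketch (ACM then emails the domain's registered contacts for approval; no Route 53 validation records are needed, so the aws_acm_certificate_validation resource would omit validation_record_fqdns):

```hcl
# Sketch: EMAIL validation variant of the certificate.
resource "aws_acm_certificate" "ssl_certificate" {
  provider                  = aws.acm_provider
  domain_name               = var.domain_name
  subject_alternative_names = ["*.${var.domain_name}"]
  validation_method         = "EMAIL"
}
```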
CloudFront file
cloudfront.tf
# Cloudfront distribution for main s3 site.
resource "aws_cloudfront_distribution" "www_s3_distribution" {
origin {
domain_name = aws_s3_bucket_website_configuration.static_website_s3_buckets.website_endpoint
origin_id = "S3-www.${var.bucket_name}"
custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "http-only"
origin_ssl_protocols = ["TLSv1.2"]
}
}
enabled = true
is_ipv6_enabled = true
default_root_object = "index.html"
aliases = ["www.${var.domain_name}"]
custom_error_response {
error_caching_min_ttl = 0
error_code = 404
response_code = 200
response_page_path = "/404.jpeg"
}
default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "S3-www.${var.bucket_name}"
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "redirect-to-https"
min_ttl = 31536000
default_ttl = 31536000
max_ttl = 31536000
compress = true
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
acm_certificate_arn = aws_acm_certificate_validation.cert_validation.certificate_arn
ssl_support_method = "sni-only"
minimum_protocol_version = "TLSv1.2_2021"
}
}
# Cloudfront S3 for redirect to www.
resource "aws_cloudfront_distribution" "root_s3_distribution" {
origin {
domain_name = aws_s3_bucket_website_configuration.root_s3_buckets.website_endpoint
origin_id = "S3-.${var.bucket_name}"
custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "http-only"
origin_ssl_protocols = ["TLSv1.2"]
}
}
enabled = true
is_ipv6_enabled = true
aliases = [var.domain_name]
default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "S3-.${var.bucket_name}"
forwarded_values {
query_string = true
cookies {
forward = "none"
}
headers = ["Origin"]
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 86400
max_ttl = 31536000
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
acm_certificate_arn = aws_acm_certificate_validation.cert_validation.certificate_arn
ssl_support_method = "sni-only"
minimum_protocol_version = "TLSv1.2_2021"
}
}
For the cloudfront.tf
file, we create two distributions, one for each S3 bucket. The non-www distribution sets its viewer protocol policy to allow-all, so both HTTP and HTTPS requests to the non-www domain pass through to the non-www S3 website endpoint, which then redirects them to the www domain.
The www distribution sets its viewer protocol policy to redirect-to-https, so any plain HTTP request to www is redirected to HTTPS and served using the previously created certificate.
Route 53 file
route53.tf
resource "aws_route53_zone" "main" {
name = var.domain_name
}
resource "aws_route53_record" "root-a" {
zone_id = aws_route53_zone.main.zone_id
name = var.domain_name
type = "A"
alias {
name = aws_cloudfront_distribution.root_s3_distribution.domain_name
zone_id = aws_cloudfront_distribution.root_s3_distribution.hosted_zone_id
evaluate_target_health = false
}
}
resource "aws_route53_record" "www-a" {
zone_id = aws_route53_zone.main.zone_id
name = "www.${var.domain_name}"
type = "A"
alias {
name = aws_cloudfront_distribution.www_s3_distribution.domain_name
zone_id = aws_cloudfront_distribution.www_s3_distribution.hosted_zone_id
evaluate_target_health = false
}
}
resource "aws_route53_record" "cert_validation" {
for_each = {
for dvo in aws_acm_certificate.ssl_certificate.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
zone_id = aws_route53_zone.main.zone_id
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = each.value.zone_id
}
Finally, the route53.tf
file creates the hosted zone, the A records for non-www and www, and the DNS validation records for the certificate.
Note that the records for both non-www and www are aliases of their respective CloudFront distributions.
Congratulations
After going through this page, you now have a fully operational static website on AWS. If you face any challenges, please open an issue on the GitHub repo.
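To publish your site content, sync your local files to the www bucket, for example with the AWS CLI (the bucket name, local path, and distribution ID below are placeholders):

```shell
# Upload the site to the www bucket (bucket name and path are placeholders).
aws s3 sync ./site s3://www.example.com --delete

# Optionally invalidate cached objects so CloudFront serves the new files;
# <distribution-id> is the ID of the www CloudFront distribution.
aws cloudfront create-invalidation --distribution-id <distribution-id> --paths "/*"
```

Remember that the www distribution caches aggressively (TTLs of one year), so an invalidation is usually needed after each deploy.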