Provisioning EKS with Terraform

Once you are done with the implementation, you will have a fully functioning EKS cluster on AWS. The repository also contains a full Terraform pipeline for any changes you want to make.

Data file

data.tf

First of all, we start with the data sources that supplement the creation of our resources. We fetch information about the AWS availability zones and retrieve the managed IAM policy for the Amazon EBS CSI driver. This policy is typically used when provisioning EBS volumes from an EKS cluster.

data "aws_availability_zones" "available" {
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

data "aws_iam_policy" "ebs_csi_policy" {
  arn = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
}
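If you want to preview what the availability-zones data source will match, you can run the equivalent AWS CLI query (a sketch, assuming the AWS CLI is installed and configured with credentials for your target region):

```shell
# List the availability zones the Terraform data source would return,
# i.e. zones whose opt-in-status is "opt-in-not-required".
aws ec2 describe-availability-zones \
  --region eu-central-1 \
  --filters Name=opt-in-status,Values=opt-in-not-required \
  --query "AvailabilityZones[].ZoneName"
```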

EBS (Elastic Block Store) file

ebs.tf

module "irsa-ebs-csi" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version = "5.44.0"

  create_role                   = true
  role_name                     = "AmazonEKSTFEBSCSIRole-${module.eks.cluster_name}"
  provider_url                  = module.eks.oidc_provider
  role_policy_arns              = [data.aws_iam_policy.ebs_csi_policy.arn]
  oidc_fully_qualified_subjects = ["system:serviceaccount:kube-system:ebs-csi-controller-sa"]
}

This module creates an IAM role that allows the EBS CSI controller in an EKS cluster to manage EBS volumes. It does this by linking the role to the OIDC provider of the EKS cluster and attaching the appropriate policy (AmazonEBSCSIDriverPolicy). The service account ebs-csi-controller-sa in the Kubernetes kube-system namespace will be able to assume this IAM role, allowing it to interact with AWS resources like EBS.
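Once the cluster is up, one way to check that the role was wired to the service account is to inspect its annotations (a sketch, assuming kubectl is already configured against the cluster):

```shell
# The eks.amazonaws.com/role-arn annotation should reference the
# AmazonEKSTFEBSCSIRole-* role created by the irsa-ebs-csi module.
kubectl get serviceaccount ebs-csi-controller-sa \
  -n kube-system \
  -o jsonpath='{.metadata.annotations}'
```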

EKS (Elastic Kubernetes Service) file

eks.tf

module "eks" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=2965d99e1ecca710bbdf8fbccb208d042239e8e2"

  cluster_name    = local.cluster_name
  cluster_version = "1.31"

  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true

  access_entries = {
    my-iam-user = {
      principal_arn = "<add-your-principal-arn>"

      policy_associations = {
        my-iam-user = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }

  cluster_addons = {
    aws-ebs-csi-driver = {
      service_account_role_arn = module.irsa-ebs-csi.iam_role_arn
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_group_defaults = {
    ami_type = "AL2023_x86_64_STANDARD"
  }

  eks_managed_node_groups = {
    one = {
      name = "node-group-1"

      instance_types = ["t3.small"]

      min_size     = 1
      max_size     = 3
      desired_size = 2
    }

    two = {
      name = "node-group-2"

      instance_types = ["t3.small"]

      min_size     = 1
      max_size     = 2
      desired_size = 1
    }
  }
}

This module configures an Amazon EKS cluster running Kubernetes version 1.31. It enables public access to the cluster API endpoint, grants admin permissions to the cluster creator, defines an access entry where you can add your own IAM user (replace <add-your-principal-arn> with your principal ARN), installs the previously mentioned EBS CSI driver as a cluster add-on, and provisions two EKS-managed node groups with auto-scaling enabled.
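After `terraform apply` completes, a typical next step is to point kubectl at the new cluster using the outputs defined later in outputs.tf (a sketch, assuming the AWS CLI and kubectl are installed locally):

```shell
# Write the cluster's connection details into ~/.kube/config,
# reading region and cluster name from the Terraform outputs.
aws eks update-kubeconfig \
  --region "$(terraform output -raw region)" \
  --name "$(terraform output -raw cluster_name)"

# With desired sizes of 2 and 1, three nodes should report Ready.
kubectl get nodes
```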

Locals file

locals.tf

locals {
  cluster_name = "eks-terraform-${random_string.suffix.result}"
}

The local value here sets the EKS cluster name to eks-terraform- followed by a unique random string.

Provider file

provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }

    random = {
      source  = "hashicorp/random"
      version = "~> 3.6.1"
    }

    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.0.5"
    }

    cloudinit = {
      source  = "hashicorp/cloudinit"
      version = "~> 2.3.4"
    }
  }

  required_version = "~> 1.3"

  backend "s3" {
    bucket         = "<s3-bucket-name>"
    key            = "eks/terraform.tfstate"
    region         = "eu-central-1"
    encrypt        = true
    dynamodb_table = "<dynamodb-table-name>"
  }
}

provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      Management = "Terraform"
    }
  }
}

The provider.tf file specifies the providers used for this Terraform provisioning and pins their versions. It also configures the S3 backend for remote Terraform state, with a DynamoDB table for state locking. Replace <s3-bucket-name> and <dynamodb-table-name> with the names of your own S3 bucket and DynamoDB table.
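The backend's bucket and lock table must exist before initialization. One way to create them is with the AWS CLI (a sketch; substitute your own names for the placeholders):

```shell
# Create the state bucket. Outside us-east-1, S3 requires an
# explicit LocationConstraint matching the region.
aws s3api create-bucket \
  --bucket <s3-bucket-name> \
  --region eu-central-1 \
  --create-bucket-configuration LocationConstraint=eu-central-1

# Terraform's S3 backend expects the lock table to have a
# string hash key named exactly "LockID".
aws dynamodb create-table \
  --table-name <dynamodb-table-name> \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

# Initialize Terraform against the remote backend.
terraform init
```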

Random file

random.tf

resource "random_string" "suffix" {
  length  = 8
  special = false
}

The code block here generates the random string that locals.tf appends to the eks-terraform- prefix, giving the EKS cluster a unique name.

Variables file

variables.tf

variable "aws_region" {
  type        = string
  description = "The AWS region you wish to deploy your resources to."
  default     = "eu-central-1"
}

In this variables file, we specify the AWS region where the EKS cluster will be provisioned.
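The default region can be overridden per run without editing the file, for example (us-east-1 here is just an illustrative alternative):

```shell
# Plan a deployment to a different region than the default.
terraform plan -var="aws_region=us-east-1"
```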

VPC file

vpc.tf

module "vpc" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-vpc.git?ref=e226cc15a7b8f62fd0e108792fea66fa85bcb4b9"

  name = "eks-terraform-vpc"

  cidr = "10.0.0.0/16"
  azs  = slice(data.aws_availability_zones.available.names, 0, 3)

  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}

We leverage the AWS-verified VPC community module to provision a VPC with public and private subnets, route tables, an internet gateway, as well as a NAT Gateway with an Elastic IP.

Outputs file

outputs.tf

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane"
  value       = module.eks.cluster_security_group_id
}

output "region" {
  description = "AWS region"
  value       = var.aws_region
}

output "cluster_name" {
  description = "Kubernetes Cluster Name"
  value       = module.eks.cluster_name
}

Finally, the outputs file exposes information about the EKS cluster, such as its API endpoint, control-plane security group, and name, as well as the region where the cluster is provisioned.
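After an apply, these values can be read back at any time, which is handy for scripting:

```shell
# -raw strips the surrounding quotes so the value can be
# used directly in shell pipelines.
terraform output -raw cluster_endpoint
terraform output -raw cluster_name
```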

Congratulations

After going through the code and implementing the Terraform changes, you now have a fully functional EKS cluster on AWS. If you face any challenges, please open an issue on the GitHub repo.