Every time I start an AWS project, I run into the same question: which Terraform backend should I use?

I love the AWS S3 + DynamoDB backend, but the basic setup only covers the essentials; you want your Terraform state protected from client misbehavior and human error. Yes, I've been on the receiving end of the call where a customer ran cloud-nuke against the Terraform S3 bucket and the whole project drifted. (And yes, I managed to import and recreate the .tfstate for 500+ resources across the landing zone and application accounts.)

So every time I start a new project, I hope to find a simple module that adds that extra protection: S3 versioning enabled, a deletion-protection policy, object lock, a DynamoDB table with point-in-time recovery, KMS encryption, and so on.

Today I decided to make yet another attempt at solving this properly. I hope it is useful to others as well, and that over time we can keep improving this module together.

https://registry.terraform.io/modules/tigpt/remote-state-s3-dynamodb-backend/aws/latest

Let's break down how to use it:

module "remote-state-s3-dynamodb-backend" {
  source  = "tigpt/remote-state-s3-dynamodb-backend/aws"
  version = "1.0.1"

  name = "my-terraform-backend"

  tags = {
    terraform = "true"
  }
}

So you call the module with a source and pin a version, of course.
Then you name it. The name should be project-related, such as a business unit or account; I like names like landingzone, network-production, or application-dev.

You can also pass some tags, because everyone loves tags and they are an important asset.
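Once the module is applied, you point your project at the resources it created. The exact bucket and table names include the random suffix, so take them from the module's outputs after apply; the names below are purely illustrative:

```hcl
# Example backend configuration for a project using this state.
# The bucket, table, region, and key values are illustrative placeholders;
# use the real names from the module's outputs (they include a random suffix).
terraform {
  backend "s3" {
    bucket         = "tf-my-terraform-backend-12345-state"
    key            = "landingzone/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "tf-my-terraform-backend-12345-locktable"
    encrypt        = true
  }
}
```

Remember that backend blocks cannot use variables or references, so these values have to be literal (or passed with `terraform init -backend-config=...`).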

Let's break out the module into parts:

├── dynamodb.tf
├── outputs.tf
├── random.tf
├── s3.tf
├── variables.tf
└── versions.tf

Each file should be self-explanatory, but let's dig into them, starting with DynamoDB.

#############################
#--- DynamoDB State Lock ---#
#############################

module "dynamodb_table" {
  source  = "terraform-aws-modules/dynamodb-table/aws"
  version = "4.0.0"

  name     = "tf-${var.name}-${random_integer.random.id}-locktable"
  hash_key = "LockID"

  attributes = [
    {
      name = "LockID"
      type = "S"
    }
  ]

  deletion_protection_enabled    = true
  point_in_time_recovery_enabled = true

  server_side_encryption_enabled     = true
  server_side_encryption_kms_key_arn = aws_kms_key.dynamodb.arn

  tags = merge(
    var.tags,
    {
      "Name" = var.name
    },
  )
}

resource "aws_kms_key" "dynamodb" {
  description             = "KMS key used to encrypt the DynamoDB state lock table"
  deletion_window_in_days = 7

  tags = merge(
    var.tags,
    {
      "Name" = var.name
    },
  )
}

I love the terraform-aws-modules collection, so we leverage it as much as possible to create the DynamoDB table. We then apply a naming convention around the input name from variables.tf, plus a random number generated in random.tf, to make sure names are unique (especially important for S3).

Then it's time to configure deletion_protection, point_in_time_recovery, and encryption with a KMS key that we create ourselves.
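random.tf isn't shown here, but from the `random_integer.random.id` references it presumably looks something like the sketch below (the min/max range is my assumption, not necessarily the module's actual values):

```hcl
# Hypothetical sketch of random.tf: a random integer appended to resource
# names so the S3 bucket name is globally unique. The range is assumed.
resource "random_integer" "random" {
  min = 10000
  max = 99999
}
```

Note that the integer is generated once and kept in state, so the names stay stable across subsequent applies.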

Next, s3.tf:

###########################
#--- S3 Backend Bucket ---#
###########################

module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "4.1.0"

  bucket = "tf-${var.name}-${random_integer.random.id}-state"
  acl    = "private"

  object_lock_enabled               = true
  control_object_ownership          = true
  object_ownership                  = "ObjectWriter"
  attach_deny_incorrect_kms_key_sse = true
  allowed_kms_key_arn               = aws_kms_key.objects.arn

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        kms_master_key_id = aws_kms_key.objects.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }

  force_destroy = true

  versioning = {
    enabled = true
  }
  attach_policy = true
  policy        = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "bucket-delete-protection",
      "Action": [
        "s3:DeleteBucket"
      ],
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::tf-${var.name}-${random_integer.random.id}-state",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
POLICY

  tags = merge(
    var.tags,
    {
      "Name" = var.name
    },
  )
}

resource "aws_kms_key" "objects" {
  description             = "KMS key used to encrypt bucket objects"
  deletion_window_in_days = 7

  tags = merge(
    var.tags,
    {
      "Name" = var.name
    },
  )
}

Once again we start with the terraform-aws-modules S3 module: we configure the bucket as private and enable object_lock, versioning, and KMS encryption. But what I want, and don't see in a lot of other modules out there, is an s3:DeleteBucket policy that denies all principals from deleting the bucket.
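To wire all this into a backend block you need the generated names, which is what outputs.tf is for. It presumably looks something like the sketch below (the output names are my assumption; the referenced module outputs `s3_bucket_id` and `dynamodb_table_id` do exist in the terraform-aws-modules S3 and DynamoDB modules):

```hcl
# Hypothetical sketch of outputs.tf: expose the generated names so
# consumers can copy them into their backend "s3" configuration.
output "state_bucket_id" {
  value = module.s3_bucket.s3_bucket_id
}

output "lock_table_name" {
  value = module.dynamodb_table.dynamodb_table_id
}
```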

I could do more, like replicating the S3 bucket to another account, but some people consider that overkill, so I only set it up for the projects/clients that want it.
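For reference, the terraform-aws-modules S3 module accepts a `replication_configuration` input, so cross-account replication would roughly mean adding something like this to the `module "s3_bucket"` call (the IAM role and destination bucket ARN are placeholders; you would also need the replication role itself and a bucket policy on the destination account):

```hcl
# Hypothetical fragment to add inside module "s3_bucket": replicate the
# state bucket to another account. Role and destination ARNs are placeholders.
  replication_configuration = {
    role = aws_iam_role.replication.arn

    rules = [
      {
        id     = "state-replication"
        status = "Enabled"

        destination = {
          bucket        = "arn:aws:s3:::tf-backup-state-bucket"
          storage_class = "STANDARD"
        }
      }
    ]
  }
```

Versioning, which we already enable, is a prerequisite for replication on both buckets.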

Feel free to fork it and open a pull request with any change you consider necessary. I will keep using this as my default AWS S3 backend and keep evolving it as I feel necessary.