Terraform – States, Locks & Team Work

Terraform can create, modify and destroy infrastructure, and in my last blog post I showed that it can also run Ansible to configure the infrastructure it creates. So far, so good, but what happens when multiple people work on the same project, each trying to adjust the code at the same time? Eek, probably lots of creating, modifying and destroying, with some messed up states.

Don’t panic though, as Terraform kind of has this covered.

The State File

If you have been following my Terraform blogs in sequence you may remember the .tfstate file, which was automatically generated and saved into the project directory. The .tfstate file records the state of the infrastructure Terraform is managing, so that Terraform knows what has been created, what is running, and which resources a change will affect (you can inspect it with terraform show). If two or more team members work from the same .tfstate file and each tries to write to it, problems will occur.

A solution for this is to place the .tfstate file somewhere all team members can access it, and to enable locking on the state file so that only one process can access/amend the file at a time. This can be achieved using AWS S3 (shared storage) and AWS DynamoDB (the locking mechanism).

What About A Repository (Git)?

It's not recommended to store the .tfstate file in a repository: Git provides no locking, so it has the same issues mentioned above, and state files can also contain sensitive values in plain text.

However, storing the .tf files that contain the code instructing Terraform is a good idea, provided the team all work from one branch.
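
As a belt-and-braces measure, a .gitignore can keep the state (and Terraform's working directory) out of the repository. A minimal example, assuming the standard file names Terraform generates:

# Local state files and their backups
*.tfstate
*.tfstate.*

# Terraform's working directory (downloaded providers, modules)
.terraform/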

TF Backend – Event Sequence

Terraform has configuration options for where it stores its backend (i.e. the .tfstate file). However, if Terraform is also creating the infrastructure that will host that backend, there is a chicken-and-egg problem: the S3 bucket and DynamoDB table must first be created using the default local backend, and only then can the backend configuration be added and the state migrated into the bucket. Destroying the infrastructure works in reverse: migrate the backend back to local first, then destroy the S3 / DynamoDB infrastructure.

Creating The S3 Bucket / DynamoDB

TF code to create an S3 bucket and a DynamoDB table:

provider "aws" {
region = "eu-west-2"
}

resource "aws_s3_bucket" "tf_state_geek" {
# Name of bucket (must be unique!)
bucket = "geektechstuff_tf_state_bucket"

# Prevent accidental deletion
lifecycle {
prevent_destroy = true
}

# Turn on versioning to track history
versioning {
enabled = true
}

# Enable encryption
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = " AES256"
}
}
}

}

resource "aws_dynamodb_table" "tf_locks_geek" {
name = "db_for_tf_locks_geek"
billing_mode = "PAY_PER_REQUEST"
hashkey = "LockID"

attribute {
name = "LockID"
type = "S"
}
}
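
A quick note: the inline versioning and server_side_encryption_configuration blocks above work on the AWS provider versions current at the time of writing, but provider 4.0 onwards deprecates them in favour of standalone resources. A rough sketch of the equivalent (same bucket, otherwise unchanged):

# Versioning as a standalone resource (AWS provider 4.x+)
resource "aws_s3_bucket_versioning" "tf_state_geek" {
  bucket = aws_s3_bucket.tf_state_geek.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Default encryption as a standalone resource (AWS provider 4.x+)
resource "aws_s3_bucket_server_side_encryption_configuration" "tf_state_geek" {
  bucket = aws_s3_bucket.tf_state_geek.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}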

With the S3 bucket and DynamoDB table created, Terraform then needs to be told to use them as its state backend by adding the following code:

terraform {
  backend "s3" {
    # S3 bucket details
    # (must match the bucket name created above)
    bucket = "geektechstuff-tf-state-bucket"
    # name to give the Terraform state file within the bucket
    key = "test/tf.state"
    # region the bucket is in
    region = "eu-west-2"

    # DynamoDB details for state locking
    dynamodb_table = "db_for_tf_locks_geek"
    encrypt        = true
  }
}

A Terraform initialisation then needs to be run again (terraform init), at which point, if you have a local state file (.tfstate), Terraform will ask whether it should be copied to the S3 backend.
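
One gotcha: the backend block cannot use variables or interpolation, so its values must be literals. If you would rather keep them out of the .tf files, Terraform supports partial configuration: leave settings out of the backend block and supply them at init time, e.g. terraform init -backend-config=backend.hcl, where backend.hcl (a file name of my choosing, any name works) is a simple key/value file along these lines:

# backend.hcl - supplied via: terraform init -backend-config=backend.hcl
bucket         = "geektechstuff-tf-state-bucket"
key            = "test/tf.state"
region         = "eu-west-2"
dynamodb_table = "db_for_tf_locks_geek"
encrypt        = true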

(Image: Terraform Backend)