
1. Introduction

Sensitive data? Yes, you heard that right. We’re talking about the data that you don’t want exposed to the public. This might include your:

  1. Database Credentials
  2. SSH keys
  3. API tokens

In short, it’s anything that could cause a significant security risk if it fell into the wrong hands. Handling this data properly is an essential part of our work in a Terraform environment.

2. Securing Sensitive Data with Terraform

Handling sensitive data properly can be tricky, but don’t worry, Terraform has got you covered! Here are some of the best practices that you should consider:

2.1 Securely managing Terraform variables

This is your first line of defense. Avoid hardcoding sensitive data into your configuration files. Terraform variables should be your go-to choice for handling such data.

# Set the sensitive flag to true

variable "password" {
  description = "Database password"
  type        = string
  sensitive   = true
}

2.2 Encrypting Sensitive Data at Rest

One of the most common places to store sensitive data at rest is in an AWS S3 bucket.

But how do we ensure that our data is encrypted in an S3 bucket?

AWS offers a solution called Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3).

When you enable SSE-S3, each object in your bucket is encrypted with a unique key. As an additional safeguard, this key itself is encrypted with a master key that Amazon regularly rotates.

Here is an example of Terraform code you can refer to:

# Enable SSE-S3 encryption for the S3 bucket

resource "aws_s3_bucket" "bucket" {
  bucket = "my-tfstate-bucket"
  acl    = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

In the above example, we’re creating a private S3 bucket and enabling server-side encryption using the AES256 algorithm.

2.3 Encrypting Sensitive Data in Transit

When it comes to encrypting data in transit, Transport Layer Security (TLS) is one of the most commonly used protocols.

AWS offers an option to enforce the use of HTTPS (which incorporates TLS) in the communication with your S3 bucket.

In order to enforce HTTPS for an S3 bucket, we can use a bucket policy that denies all non-HTTPS requests:

# Deny all non-HTTPS requests

resource "aws_s3_bucket" "bucket" {
  bucket = "my-tfstate-bucket"
  acl    = "private"
}

resource "aws_s3_bucket_policy" "bucket_policy" {
  bucket = aws_s3_bucket.bucket.id

  # Deny non-HTTPS request policy
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "ForceSSLOnlyAccess"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource  = "${aws_s3_bucket.bucket.arn}/*"
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      },
    ]
  })
}

2.4 Using the sensitive argument in Terraform to prevent logging of sensitive data

Now, imagine you’re a detective (Sherlock Holmes, perhaps?), and your logs are like your case notes. They tell you the story of what happened in your system.

But there are certain details – the secret stuff – that you don’t want to be written down in your notes for everyone to see. That’s exactly where the sensitive argument in Terraform comes to the rescue!

Let’s see how it works through a practical example:

Consider a scenario where you are creating a new AWS RDS instance using Terraform. One of the necessary input variables would be your database password. We certainly wouldn’t want this information being printed in our console output or logs.

# Make the db_password variable sensitive

variable "db_password" {
  description = "The password for our database."
  type        = string
  sensitive   = true
}

resource "aws_db_instance" "my_db" {
  allocated_storage    = 20
  storage_type         = "gp2"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t2.micro"
  name                 = "my_db"
  username             = "admin"
  password             = var.db_password
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true
}

In this example, we’ve set the sensitive attribute of the db_password variable to true. This ensures that Terraform will redact this value from the logs and console outputs, keeping our secret safe and sound.

However, that’s not the end of the story. It’s also crucial to understand that Terraform can’t completely prevent sensitive data from appearing in logs or command outputs, especially if you explicitly output it. The value is also still stored in plain text in the state file, which is one more reason to protect the state itself (more on that in section 2.5).

So, it’s essential to use the sensitive flag for output variables as well, like so:

# Set sensitive = true for the output variable

output "db_password" {
  value     = aws_db_instance.my_db.password
  sensitive = true
}

This prevents the password from being exposed in the terraform apply output or when you run terraform output.

So, remember, my friends, the sensitive argument is a really handy tool in our Terraform toolkit. But like any tool, it’s up to us to use it wisely.

2.5 Remote state storage and encryption

Securing your Terraform state files is like putting your gold in a vault (no pun intended) – it’s a must!

Terraform keeps track of your infrastructure’s state in what it calls, well, the state file. By default, this file is stored locally, which is not ideal, especially when you’re working in a team.

A more secure and collaborative approach is to store the state file remotely. For this, we can use a Terraform backend.

In AWS land, a popular choice for a remote backend is the S3 bucket. Here’s an example of how to configure S3 as your remote backend:

# Remote S3 bucket to store the state file

terraform {
  backend "s3" {
    # S3 bucket hosted on AWS
    bucket = "my-terraform-state"
    # Path to the state file inside the bucket
    key    = "path/to/my/key"
    region = "us-west-2"
  }
}

In this code snippet, my-terraform-state is the name of the S3 bucket, path/to/my/key is the location of your state file within the bucket, and us-west-2 is the AWS region where your bucket resides.

But, we’re not done yet! Just storing our state file remotely isn’t enough. We should also encrypt it at rest. Thankfully, AWS makes this pretty easy for us with S3 bucket encryption:

# Set the encrypt flag to true

terraform {
  backend "s3" {
    bucket  = "my-terraform-state"
    key     = "path/to/my/key"
    region  = "us-west-2"
    encrypt = true
  }
}

See that encrypt = true at the end? That’s all you need to ensure your state file is encrypted at rest in your S3 bucket.

Also, when working in a team, you would want to prevent any conflicts in the state file. This is where state locking comes in. If you enable state locking, only one person can modify the state at a time.

In AWS, we can use DynamoDB for state locking:

# State locking using DynamoDB

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "path/to/my/key"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}

In this case, my-lock-table is the name of the DynamoDB table that will keep track of the state lock.
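Note that the backend configuration only references this table – it has to exist already. Here’s a minimal sketch of what that table could look like in Terraform; the S3 backend requires the table to have a string primary key named LockID:

# DynamoDB table used by the S3 backend for state locking;
# the hash key must be a string attribute named "LockID"
resource "aws_dynamodb_table" "terraform_lock" {
  name         = "my-lock-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}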

So, there you have it, my friends! By storing your state file remotely, encrypting it, and enabling state locking, you’re putting that gold safely in a vault.

Learn more about Terraform state locking using DynamoDB (LockID) and an S3 bucket:

https://youtube.com/watch?v=MxdvSgoWK7E

3. Practical Examples of Securing Sensitive Data

3.1 Using environment variables for sensitive data

Using environment variables is like having a secret handshake – you don’t write it down, but you and your pals know it by heart.

Similarly, we can use environment variables to handle sensitive data in Terraform, preventing us from exposing the data directly in our Terraform files.

Terraform can automatically fetch the values for your variables if you’ve stored them as environment variables. You just need to prefix the variable name with ‘TF_VAR_’. Here’s how to set this up:

export TF_VAR_db_password="MyS3cretP@ssword"

In this case, db_password is our variable name in Terraform, and we are setting the value as MyS3cretP@ssword.

Let’s assume we have a Terraform configuration to create an AWS RDS instance:

# Using an environment variable to supply the password

variable "db_password" {
  description = "Password for the database"
  type        = string
  sensitive   = true
}

resource "aws_db_instance" "default" {
  allocated_storage    = 10
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t2.micro"
  name                 = "my_db"
  username             = "admin"
  password             = var.db_password
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true
}

Here, we are using the db_password variable as the password for our RDS instance. When we run terraform apply, Terraform will automatically fetch the password from our environment variable TF_VAR_db_password.

Remember, storing your environment variables securely is also crucial. You might want to consider using a tool like direnv or storing them securely in your CI/CD environment.
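With direnv, for example, you could keep the export in a project-local .envrc file that direnv loads automatically once you approve it. A small sketch (the password value is just a placeholder):

# .envrc – loaded by direnv after you run `direnv allow` in this directory
export TF_VAR_db_password="MyS3cretP@ssword"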

And that’s how you manage your secrets with environment variables in Terraform! No paper trails left behind, and your secret handshake remains a secret.

3.2 Securing sensitive data with AWS Key Management Service (KMS)

Let’s step into the world of AWS Key Management Service (KMS)!

Picture this – you have a secret message, and you need to ensure that only certain people can read it. What do you do? Well, in the olden days, you might have used a cipher or a secret code. In the AWS universe, we have AWS Key Management Service (KMS)!

KMS is a managed service that allows you to create and control cryptographic keys, which you can use to encrypt and decrypt data. It’s integrated with other AWS services, making it easier to encrypt the data you store in those services and to control access to the keys that decrypt it.

Let’s walk through an example where we are creating an S3 bucket and encrypting its content with a KMS key:

# Create an S3 bucket and encrypt its contents using a KMS key

resource "aws_kms_key" "my_key" {
  description             = "This is my KMS key for encrypting an S3 bucket"
  deletion_window_in_days = 10
}

resource "aws_s3_bucket" "bucket" {
  bucket = "my-bucket"
  acl    = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "aws:kms"
        kms_master_key_id = aws_kms_key.my_key.arn
      }
    }
  }
}

In this example, we first create a KMS key with the “aws_kms_key” resource.

Then, when we create our S3 bucket, we specify a server_side_encryption_configuration block, where we choose aws:kms as our sse_algorithm and provide the ARN of our KMS key as kms_master_key_id.

By doing this, any object that’s uploaded to our bucket will automatically be encrypted using the KMS key we created. And only those with the necessary permissions to use this KMS key will be able to decrypt the data.

Do note that KMS keys have a cost associated with them, and you need to manage permissions for the key separately. So, be careful and vigilant about who can access what!
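One way to manage those permissions in Terraform is to attach a key policy directly to the key. Here’s a minimal sketch – the account ID and role name are placeholders, not values from this article:

# Hypothetical key policy: the account root keeps full admin access
# (so you can't lock yourself out), while one role may only use the key
resource "aws_kms_key" "restricted_key" {
  description             = "KMS key with a restricted key policy"
  deletion_window_in_days = 10

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowRootAccountAdmin"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::123456789012:root" }
        Action    = "kms:*"
        Resource  = "*"
      },
      {
        Sid       = "AllowAppRoleUseOfKey"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::123456789012:role/my-app-role" }
        Action    = ["kms:Decrypt", "kms:GenerateDataKey"]
        Resource  = "*"
      }
    ]
  })
}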

And that’s it, my friends! With AWS KMS, your sensitive data gets an extra layer of security.

3.3 Implementing Vault for secrets management

Think of HashiCorp Vault as your very own digital Swiss bank. It’s a centralized service for securely storing and accessing secrets, whether they’re API keys, passwords, or certificates.

Vault can be particularly useful when working with Terraform. You can use it to dynamically generate secrets, so you’re not hard-coding them into your Terraform scripts. Plus, it has detailed audit logs so you can track who’s accessing what.

Here’s an example of how to use Vault with Terraform:

First, you need to configure the Vault provider:

# HashiCorp Vault provider

provider "vault" {
  # It is recommended to use environment variables for VAULT_ADDR and VAULT_TOKEN
  # address = "http://localhost:8200"
  # token   = "s.myVaultToken123456"
}

For the Vault address and token, it’s a best practice to use environment variables (VAULT_ADDR and VAULT_TOKEN respectively). This way, you’re not hardcoding these potentially sensitive details into your scripts.
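For example (both values below are placeholders):

export VAULT_ADDR="https://vault.example.com:8200"
export VAULT_TOKEN="s.myVaultToken123456"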

Now, let’s assume you have a secret stored in Vault at path secret/my_secrets/db_password. Here’s how you would retrieve it with Terraform:

# Fetch the secret from Vault

data "vault_generic_secret" "db_password" {
  path = "secret/my_secrets/db_password"
}

output "db_password" {
  value     = data.vault_generic_secret.db_password.data["value"]
  sensitive = true
}

In this example, the vault_generic_secret data source is used to fetch the secret from Vault. The value of the secret can then be accessed with data.vault_generic_secret.db_password.data["value"].

We’re also using the sensitive attribute for our output to ensure that the password won’t be shown in the terraform apply output.
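In practice, rather than outputting the secret, you’d usually feed it straight into a resource. Here’s a sketch that mirrors the RDS instance from earlier:

resource "aws_db_instance" "my_db" {
  allocated_storage    = 20
  storage_type         = "gp2"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t2.micro"
  name                 = "my_db"
  username             = "admin"
  # The password now comes from Vault instead of a Terraform variable
  password             = data.vault_generic_secret.db_password.data["value"]
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true
}

Keep in mind that the fetched value still ends up in your Terraform state, which is one more reason to encrypt your remote state as discussed above.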

But remember, the Swiss bank isn’t invincible, and neither is Vault. You’ll need to secure your Vault instance – think about encryption, access policies, and regular auditing. Use it wisely, and it can be a powerful tool in your Terraform toolkit.

Here are more concrete lab sessions and a playlist on how to use HashiCorp Vault:

  1. HashiCorp Vault Installation – Part 1
  2. HashiCorp Vault Start and Stop in Development mode – Part 2
  3. HashiCorp Vault Read Write and Delete secrets – Part 3
  4. HashiCorp Vault Secret Engine and Secret Engine path – Part 4
  5. HashiCorp Vault Dynamic Secrets generation – Part 5
  6. HashiCorp Vault Token Authentication & GitHub Authentication – Part 6 | HashiCorp Vault tutorials
  7. HashiCorp Vault Policy – Part 7 | HashiCorp Vault tutorials
  8. HashiCorp Vault Deploy Vault, HTTP API & UI – Part 8

4. Regular Auditing and Rotating Secrets

4.1 The importance of auditing

Let’s step into the shoes of our inner security guard and explore the world of auditing and secret rotation.

Imagine you’re a detective on a TV show, and you’ve got a case full of evidence. To solve the mystery, you need to look at each piece of evidence carefully and figure out what it tells you. In the world of cloud infrastructure, this is akin to auditing.

Regular auditing is like regularly checking that evidence room, making sure nothing’s out of place and everything’s accounted for. You’re reviewing logs, looking for any suspicious activity, and verifying who accessed what and when.

In Terraform, you might want to audit your state file. The state file is like a treasure map – it tells you what resources you have and their configurations. It can contain sensitive information, so it’s crucial to know who’s accessing it.

For example, if you’re using S3 as your remote backend, you can enable access logging and use CloudTrail to track access to your state file. This will give you valuable insights, like who accessed the state file, when they accessed it, and what actions they performed.
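Here’s a minimal sketch of enabling server access logging on the state bucket, using the same provider syntax as the earlier examples (both bucket names are placeholders):

# A separate bucket to receive the access logs
resource "aws_s3_bucket" "log_bucket" {
  bucket = "my-terraform-state-logs"
  acl    = "log-delivery-write"
}

# The state bucket, with access logging pointed at the log bucket
resource "aws_s3_bucket" "state_bucket" {
  bucket = "my-terraform-state"
  acl    = "private"

  logging {
    target_bucket = aws_s3_bucket.log_bucket.id
    target_prefix = "state-access/"
  }
}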

4.2 Regular rotation of secrets

Just like a master of disguise changes their appearance regularly to avoid detection, we need to rotate our secrets regularly. This could be your database passwords, API keys, or tokens – anything that provides access to your resources.

Rotating secrets reduces the risk of them being misused. If a secret is exposed but you rotate it soon after, the exposed secret becomes useless.

Terraform doesn’t directly help with secret rotation, but Vault, a tool we discussed earlier, shines here. Vault has a feature called dynamic secrets. It can generate secrets on-the-fly and automatically revoke them after a certain period. You can use this feature to regularly rotate your secrets.
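For a taste of what that looks like, once Vault’s database secrets engine is configured (the role name below is hypothetical), every read produces a fresh, short-lived credential that Vault revokes when the lease expires:

# Assumes the database secrets engine is enabled and a role named
# "readonly" is configured with a short TTL
vault read database/creds/readonly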

5. Conclusion

First, we embarked on the noble path of “Understanding sensitive data in Terraform”, where we grasped why it’s critical to protect sensitive data and the risks if we don’t. Our quest then led us to decipher the enigma of “Encrypting sensitive data in transit and at rest”, unveiling the twin shields of SSL/TLS and AWS Key Management Service (KMS).

Next, our journey took a slightly esoteric turn as we delved into the cosmos of Terraform to discover the magic of the “sensitive” argument, an incantation that prevents our secrets from appearing in Terraform’s output.

Our path then led us to the heart of the Terraform kingdom – the “State File”. We learnt how to secure it by using remote backends and encryption, storing it far away from prying eyes, and protecting our precious map to the treasure.

We then navigated the labyrinth of environment variables, turning them into our allies to handle sensitive data securely and elegantly.

Barely pausing for breath, we ventured into the world of AWS Key Management Service (KMS), learning to wield it as a sword that encrypts our data with unbreakable cryptographic keys.

Our next destination was the HashiCorp Vault, an impregnable fortress that safely stores our secrets, only releasing them to those deemed worthy.

Our journey was nearing its end, but not before we unravelled the mystery of encrypting Terraform state files. We learnt how to make our treasure map indecipherable to anyone but us.

Finally, we took on the role of vigilant watchmen, keeping a sharp eye on our infrastructure through regular auditing and changing our secret codes frequently by rotating secrets.
