DoiT Cloud Intelligence™
Deploying an Amazon Aurora MySQL Cluster with Terraform

Deploying database solutions on the cloud requires careful consideration of security, scalability, and ease of management. Anything that is misconfigured or set up suboptimally can create major technical headaches, technical debt, and, worst case, significant business impact in the case of an outage or non-performant DB. Amazon Aurora MySQL offers a highly scalable and secure relational database service.
In this blog, we explore how to automate the deployment of an Amazon Aurora MySQL cluster using Terraform, focusing on creating a highly available configuration with multiple reader instances for read scaling.
Prerequisites:
- An AWS account
- Terraform installed on your machine
- Familiarity with AWS services and Terraform
- Terraform code from the GitHub repo
Structure of the Project: Our Terraform project is structured into modules for clarity and reuse. Key modules include:
- KMS Module: Manages encryption keys for the database.
- Aurora Cluster Module: Handles the deployment and configuration of the Aurora MySQL cluster.
Summary of the Modular Directory Structure
This directory structure is designed to organize and manage the Terraform code efficiently for deploying an Amazon Aurora MySQL cluster with encryption managed by AWS KMS.
Root Directory
Includes main.tf, providers.tf, terraform.tfvars, variables.tf, and outputs.tf.
- main.tf: Contains the main configuration where modules are called.
- providers.tf: Defines provider settings and version requirements.
- variables.tf: Declares variables used across the configuration.
- terraform.tfvars: Sets override values for variables defined elsewhere.
- outputs.tf: Defines outputs from the Terraform configuration.
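To make the wiring concrete, the root main.tf might call the two modules and pass the KMS key ARN from one to the other. This is a minimal sketch; the module paths, variable names, and output names are assumptions to be adjusted to the repo's actual interfaces:

```hcl
# Root main.tf - illustrative sketch of module wiring.
# Module paths and input/output names are assumptions.
module "kms" {
  source = "./modules/kms"

  rds = true
}

module "aurora_cluster" {
  source = "./modules/aurora-cluster"

  name               = var.name
  engine             = "aurora-mysql"
  kms_key_id         = module.kms.rds_kms_key_arn
  security_group_ids = var.security_group_ids
}
```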
Modules Directory
Contains subdirectories for each distinct module used in the project.
KMS Module: Handles the creation and management of AWS KMS keys for encrypting data at rest.
- main.tf: Contains the resources for creating KMS keys.
- variables.tf: Defines input variables specific to KMS.
- outputs.tf: Provides output attributes from the KMS resources.
Aurora Cluster Module: Manages the Aurora MySQL cluster’s deployment.
- main.tf: Configures the Aurora MySQL cluster and associated resources.
- variables.tf: Lists input variables specific to the Aurora MySQL cluster.
- outputs.tf: Outputs attributes like the database endpoint.
This structure ensures a clean and organized codebase and enhances the reusability and maintainability of the Terraform code, making it easier to manage cloud resources effectively.
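The layout described above can be sketched as a directory tree (the file names follow the summary; the module directory names are illustrative):

```text
.
├── main.tf
├── providers.tf
├── variables.tf
├── terraform.tfvars
├── outputs.tf
└── modules
    ├── kms
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    └── aurora-cluster
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
```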
1. Setting Up the KMS Module:
Encryption at rest is critical for protecting data. The Aurora MySQL cluster uses AWS KMS for key management, specifically a customer managed key (CMK) that you create, manage, and own to encrypt your Aurora cluster. Managing your own KMS key offers several advantages over an AWS-managed key. CMKs allow fine-grained access control policies, including the ability to specify who can use and manage them. Each use of a CMK is logged to CloudTrail for auditing, which is crucial for regulatory compliance, and you can restrict key usage to specific AWS regions. Charges for CMKs, billed for storage and API usage, are also more predictable and transparent than those for AWS-managed keys, which helps with budgeting and cost management.
The KMS module is responsible for creating and managing these keys; the module's outputs.tf file exposes the ARN of the KMS key that the Aurora module will use.
# KMS Module - variables.tf
variable "rds" {
  description = "Enable customer managed KMS key for RDS"
  type        = bool
  default     = true
}
# KMS Module - main.tf
resource "aws_kms_key" "rds" {
  description             = "KMS key for RDS"
  enable_key_rotation     = true
  deletion_window_in_days = 30
}
Explanation:
- Key Rotation: Automated key rotation enhances security by periodically changing the underlying encryption key.
- Deletion Window: Specifies the time frame before a deleted key is irrevocably destroyed, allowing recovery if needed.
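As described above, the KMS module's outputs.tf hands the key to the Aurora module. A minimal sketch of what that file could contain follows; the output names here are assumptions, not the repo's exact identifiers:

```hcl
# KMS Module - outputs.tf (illustrative; output names are assumptions)
output "rds_kms_key_arn" {
  description = "ARN of the customer managed KMS key used to encrypt the Aurora cluster"
  value       = aws_kms_key.rds.arn
}

output "rds_kms_key_id" {
  description = "Key ID of the customer managed KMS key"
  value       = aws_kms_key.rds.key_id
}
```

The ARN is the safer value to pass to the cluster's kms_key_id argument, since it is unambiguous across accounts and regions.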
2. Configuring the Aurora Cluster Module:
The Aurora Cluster module is at the heart of our deployment. It sets up the database with specifications for performance and availability. It lets you configure everything from the database engine you'll deploy, to the existing VPC and Security Groups to be used with it, to options like preventing accidental deletion of the cluster via Terraform unless the prevent_destroy = true lifecycle clause is explicitly removed from the module:
# Aurora Cluster Module - variables.tf
variable "read_replica_count" {
  description = "Number of Read Replicas in addition to Writer instance"
  type        = number
}
# Aurora MySQL Cluster config
resource "aws_rds_cluster" "aurora_mysql_cluster" {
  cluster_identifier                  = var.name
  engine                              = var.engine
  engine_mode                         = var.engine_mode
  engine_version                      = var.aurora_mysql_cluster_engine_version
  database_name                       = var.database_name
  master_username                     = var.aurora_mysql_cluster_master_username
  # Create and store password in Secrets Manager
  manage_master_user_password         = true
  final_snapshot_identifier           = "${var.name}-snapshot"
  skip_final_snapshot                 = var.skip_final_snapshot
  deletion_protection                 = var.deletion_protection
  backup_retention_period             = var.backup_retention_period
  preferred_backup_window             = var.backup_window
  preferred_maintenance_window        = var.maintenance_window
  port                                = var.port
  db_subnet_group_name                = aws_db_subnet_group.db_subnet_group.name
  vpc_security_group_ids              = concat(var.security_group_ids, [aws_security_group.aurora_mysql_sg.id])
  apply_immediately                   = true
  iam_database_authentication_enabled = false
  copy_tags_to_snapshot               = true
  storage_encrypted                   = true
  kms_key_id                          = var.kms_key_id
  db_cluster_parameter_group_name     = aws_rds_cluster_parameter_group.cluster_param_group.name
  enabled_cloudwatch_logs_exports     = var.engine_mode == "serverless" ? [] : var.enabled_cloudwatch_logs_exports
  tags                                = merge(var.tags, { "Name" = var.name })

  lifecycle {
    ignore_changes  = [engine_version, scaling_configuration, engine_mode]
    prevent_destroy = true
  }
}

# Aurora MySQL Instances within Cluster - one Writer and "read_replica_count" Readers
resource "aws_rds_cluster_instance" "aurora_mysql_instance" {
  count                        = var.engine_mode == "serverless" ? 0 : (1 + var.read_replica_count)
  identifier                   = "${var.name}-${count.index}"
  cluster_identifier           = aws_rds_cluster.aurora_mysql_cluster.id
  engine                       = var.engine
  engine_version               = var.aurora_mysql_cluster_engine_version
  instance_class               = var.instance_type
  db_subnet_group_name         = aws_db_subnet_group.db_subnet_group.name
  db_parameter_group_name      = aws_db_parameter_group.db_param_group.name
  monitoring_role_arn          = var.create_monitoring_role && var.monitoring_interval > 0 ? aws_iam_role.rds_enhanced_monitoring[0].arn : null
  monitoring_interval          = var.create_monitoring_role && var.monitoring_interval > 0 ? var.monitoring_interval : null
  auto_minor_version_upgrade   = true
  performance_insights_enabled = true
  tags                         = var.tags

  lifecycle {
    ignore_changes = [engine_version]
  }
}
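The cluster and instance resources above reference a DB subnet group and a security group created elsewhere in the module. A minimal sketch of those supporting resources follows; var.subnet_ids, var.vpc_id, and var.allowed_cidr_blocks are assumed input variables, not part of the original module:

```hcl
# Supporting resources referenced by the cluster (illustrative sketch).
# var.subnet_ids, var.vpc_id and var.allowed_cidr_blocks are assumptions.
resource "aws_db_subnet_group" "db_subnet_group" {
  name       = "${var.name}-subnet-group"
  subnet_ids = var.subnet_ids
  tags       = var.tags
}

resource "aws_security_group" "aurora_mysql_sg" {
  name   = "${var.name}-sg"
  vpc_id = var.vpc_id

  # Allow MySQL traffic only from trusted CIDR ranges
  ingress {
    from_port   = var.port
    to_port     = var.port
    protocol    = "tcp"
    cidr_blocks = var.allowed_cidr_blocks
  }

  tags = var.tags
}
```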
Important Configuration Parameters (supply these in terraform.tfvars):
- Cluster Identifier: Unique identifier for the Aurora cluster; critical for cluster management.
- Engine Configuration: Specifies the database engine and version, ensuring compatibility and feature availability. It can be Aurora MySQL 5.7.x-compatible or MySQL 8.0.x-compatible, but it's highly recommended to use 8.0.x at this time, since AWS has deprecated 5.7.x and only offers extended (paid) support for it.
- Instance Count: Controls the number of reader instances created alongside the writer, facilitating read scaling.
- Security and Networking: Associates the cluster with specific VPC and security group settings, ensuring it operates within a secure and isolated network environment. You will use the subnet IDs and other associated values from an existing VPC in which you want to place your new Aurora MySQL cluster.
- Credentials Management: Utilizes Secrets Manager to store the username/password for the database so credentials are stored following best practices.
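Tying these parameters together, a terraform.tfvars file might look like the following. All values are placeholders for illustration; substitute your own identifiers, sizes, and version:

```hcl
# terraform.tfvars - illustrative placeholder values
name                                 = "my-aurora-cluster"
engine                               = "aurora-mysql"
engine_mode                          = "provisioned"
aurora_mysql_cluster_engine_version  = "8.0.mysql_aurora.3.05.2"
database_name                        = "appdb"
aurora_mysql_cluster_master_username = "admin"
read_replica_count                   = 2
instance_type                        = "db.r6g.large"
port                                 = 3306
backup_retention_period              = 7
deletion_protection                  = true
skip_final_snapshot                  = false
```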
3. Output and Management:
Outputs are crucial for retrieving connection details and managing the cluster post-deployment, as well as for passing runtime values between modules if necessary.
# Outputs - outputs.tf
output "aurora_cluster_endpoint" {
  description = "The endpoint at which the Aurora cluster is accessible"
  value       = aws_rds_cluster.aurora_mysql_cluster.endpoint
}

output "aurora_cluster_reader_endpoint" {
  description = "The reader endpoint for the Aurora cluster"
  value       = aws_rds_cluster.aurora_mysql_cluster.reader_endpoint
}
4. Conclusion
Using Terraform to deploy Amazon Aurora MySQL clusters simplifies the process, ensuring consistency and repeatability across environments. By leveraging modules for KMS customer managed encryption keys and the Aurora cluster's fine-tuned configuration, the setup remains modular and easy to adjust. This solution also leverages AWS's built-in capability to create and store your MySQL master password natively in Secrets Manager. The approach supports best practices in cloud architecture, including security, scalability, and disaster recovery. Whether you are scaling out readers to handle increased load or managing encryption keys securely, Terraform offers a robust way to manage Aurora MySQL cluster configurations from the simplest to the most complex, spanning a wide range of use cases.
Do you still have questions about using my recommendations to deploy an Aurora MySQL cluster configured for your use case to your AWS environment using Terraform?
Reach out to us at DoiT International. Staffed exclusively with senior engineering talent, we specialize in advanced cloud architectural design, debugging advice, and consulting services.