Description
The `aws_elasticache_cluster` resource is created without a numeric suffix when cluster mode is disabled. This causes issues with external processes, seemingly related to AWS ElastiCache service updates, which appear to delete the cluster instance that lacks the `-###` suffix. The result is a state mismatch: Terraform detects that the resource has been deleted and plans to recreate it, even though the other nodes with the numeric suffix still exist.
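For context, here is a hypothetical sketch of the kind of naming logic that would produce this outcome. This is not the module's actual source, only an illustration of the suspected behavior, using the names from the reproduction below:

```hcl
# Hypothetical illustration only -- NOT the module's source code.
# Index 0 keeps the bare cluster_id, so "test-redis" has no -### suffix,
# while the remaining nodes become test-redis-001, test-redis-002, ...
resource "aws_elasticache_cluster" "this" {
  count           = 3
  cluster_id      = count.index == 0 ? "test-redis" : format("test-redis-%03d", count.index)
  engine          = "redis"
  node_type       = "cache.t3.medium"
  num_cache_nodes = 1 # standalone (non-cluster-mode) nodes are single-node clusters
}
```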
- ✋ I have searched the open/closed issues and my issue is not listed.
⚠️ Note
Before you submit an issue, please perform the following first:
1. Remove the local `.terraform` directory (ONLY if state is stored remotely, which hopefully you are following that best practice!): `rm -rf .terraform/`
2. Re-initialize the project root to pull down modules: `terraform init`
3. Re-attempt your terraform plan or apply and check if the issue still persists
Versions
- Module version [Required]: 1.6.0
- Terraform version: 1.9.7
- Provider version(s): 5.25
Reproduction Code [Required]
Steps to reproduce the behavior:
- Workspaces: NO
- Cleared local cache: YES
Example structure:
module "elasticache" {
source = "terraform-aws-modules/elasticache/aws"
version = "1.6.0" # Specify the module version
cluster_id = "test-redis"
engine = "redis"
family = "redis7"
major_engine_version = "7.x"
node_type = "cache.t3.medium"
# Key configuration to reproduce the issue
cluster_mode_enabled = false
num_cache_nodes = 3 # Example number of nodes
# Add other necessary configurations like VPC, subnets, security groups, etc.
# Ensure this is a complete and runnable example.
vpc_id = "vpc-xxxxxxxxxxxxxxxxx"
subnet_ids = ["subnet-xxxxxxxxxxxxxxxxx", "subnet-xxxxxxxxxxxxxxxxx"]
security_group_ids = ["sg-xxxxxxxxxxxxxxxxx"]
}
Steps to reproduce the behavior:
1. Initialize Terraform: `terraform init`
2. Apply the configuration: `terraform apply`
3. Wait for some time, potentially until an AWS ElastiCache service update or a similar event occurs.
4. Run `terraform plan`.
Expected behavior
The module should create all ElastiCache cluster nodes with a consistent naming convention that includes a numeric suffix (e.g., test-redis-001, test-redis-002, test-redis-003). This would prevent external processes from mistakenly identifying one of the nodes as misconfigured and deleting it. With the infrastructure running as expected, `terraform plan` should report no changes.
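A minimal sketch of the naming convention I would expect, assuming a count-based resource; the names here are illustrative, not the module's internals:

```hcl
# Sketch only: every node gets a zero-padded numeric suffix, so no node
# carries the bare cluster_id that the external process appears to delete.
resource "aws_elasticache_cluster" "this" {
  count           = 3
  cluster_id      = format("test-redis-%03d", count.index + 1) # test-redis-001 ... test-redis-003
  engine          = "redis"
  node_type       = "cache.t3.medium"
  num_cache_nodes = 1
}
```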
Actual behavior
One of the ElastiCache cluster nodes is created without a numeric suffix (e.g., test-redis), while the others are created with a suffix (e.g., test-redis-001, test-redis-002). An external process, likely related to AWS service maintenance, deletes the node that lacks the suffix.
Subsequently, terraform plan detects that the resource aws_elasticache_cluster.this[0] (with the name test-redis) has been deleted and proposes to recreate it, leading to a persistent "drift" in the infrastructure.
Terminal Output Screenshot(s)
```
Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply" which may have affected this plan:

  # module.elasticache.module.elasticachev2.aws_elasticache_cluster.this[0] has been deleted
  - resource "aws_elasticache_cluster" "this" {
      - arn         = "arn:aws:elasticache:us-east-1:xxxxx:cluster:test-redis" -> null
      - cache_nodes = [
          - {
              - address           = "test-redis.test-redis.hzmjct.use1.cache.amazonaws.com"
              - availability_zone = "us-east-1c"
              - id                = "0001"
              - port              = 6379
            },
        ]
      # ... other attributes
    }
```