Description
When creating an EKS cluster with Bottlerocket AMI nodes using the EKS module and managed node groups, I observed that two launch templates are created. Although the EKS node group correctly references the template with the tag eks:cluster-name set to swi-dev-3-bottlerocket, the auto scaling group ends up referencing a launch template with eks:nodegroup-name set to fyn-nodegroup-20250407060414008800000003. As a result, the EKS nodes fail to join the cluster and do not appear in the EKS nodes section.
I am including the Terraform variables and the relevant plan output for context.
Expected Behavior
Both the EKS node group and the associated auto scaling group should reference the same launch template. Specifically, the launch template used should consistently have eks:cluster-name set to swi-dev-3-bottlerocket to ensure that the nodes join the cluster as expected.
Actual Behavior
Two different launch templates are created.
The EKS node group correctly references the launch template with eks:cluster-name as swi-dev-3-bottlerocket.
The auto scaling group, however, references a launch template with eks:nodegroup-name as fyn-nodegroup-20250407060414008800000003.
This mismatch prevents the nodes from joining the EKS cluster; one way to confirm the join failure is sketched below.
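A quick way to confirm the join failure from the managed node group side (a sketch using the AWS CLI and kubectl; the cluster and node group names are the ones from this report, so substitute your own):

```bash
# Sketch only: the cluster/node group names are taken from this report; adjust as needed.
CLUSTER=swi-dev-3-bottlerocket
NODEGROUP=fyn-nodegroup-20250407060414008800000003

# Join failures surface as health issues on the managed node group
# (for example, NodeCreationFailure: "Instances failed to join the kubernetes cluster").
aws eks describe-nodegroup \
  --cluster-name "$CLUSTER" \
  --nodegroup-name "$NODEGROUP" \
  --query 'nodegroup.health.issues'

# Nodes that did manage to register with the cluster
kubectl get nodes -o wide
```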
Module & Environment Information
Module Version: 20.33.1
Terraform Version: v1.11.1
AWS Provider Version: 5.84.0
Steps to Reproduce
Use the provided Terraform variables to configure the EKS module with Bottlerocket AMI nodes.
Run terraform plan and note that two launch templates are scheduled for creation.
Apply the configuration.
Verify using the AWS CLI (`aws ec2 describe-launch-templates --filters "Name=tag:eks:cluster-name,Values=swi-dev-3-bottlerocket"`) that the auto scaling group references a launch template with eks:nodegroup-name set to fyn-nodegroup-20250407060414008800000003 instead of the expected swi-dev-3-bottlerocket.
Check the EKS console and observe that the nodes do not join the cluster.
Additional Context
This issue appears to be related to the custom launch template configuration within the EKS module. The mismatch between the launch template referenced by the EKS node group and the one used by the auto scaling group results in nodes not being recognized by the EKS cluster. Any insights or fixes to ensure that both resources consistently use the same launch template would be greatly appreciated.
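As a minimal illustration of the kind of configuration involved (all concrete values below are assumptions for illustration, not the actual variables used in this report):

```hcl
# Illustrative sketch only; the node group key, instance types, sizes, and
# networking inputs are assumptions, not the configuration actually used here.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.33.1"

  cluster_name    = "swi-dev-3-bottlerocket"
  cluster_version = "1.31"

  vpc_id     = var.vpc_id
  subnet_ids = var.private_subnet_ids

  eks_managed_node_groups = {
    # Managed node group with a Bottlerocket AMI; the module builds a custom
    # launch template for the group by default.
    fyn-nodegroup = {
      ami_type       = "BOTTLEROCKET_x86_64"
      instance_types = ["m5.large"]

      min_size     = 1
      max_size     = 3
      desired_size = 2
    }
  }
}
```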
provider.tf

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.84.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.19.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1"
}

provider "kubernetes" {
  config_path            = "~/.kube/config"
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
}

provider "kubectl" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  load_config_file       = false # Avoids conflicts with kubectl's default config
}

# Fetch EKS cluster details
data "aws_eks_cluster" "cluster" {
  depends_on = [module.eks] # Ensure the module is created before fetching data
  name       = var.cluster_name
}

# Get EKS authentication token
data "aws_eks_cluster_auth" "cluster" {
  depends_on = [module.eks] # Ensure dependency on EKS module
  name       = var.cluster_name
}
```
jagtapa changed the title from "Mismatched Launch Template Reference for EKS Nodegroup – Auto Scaling Group Uses Incorrect Template" to "Mismatched Launch Template Reference for EKS Nodegroup – Auto Scaling Group Uses Incorrect Launcher Template" on Apr 7, 2025.
Please familiarize yourself with the service: when using a custom launch template, the values provided on the custom launch template are merged into a launch template created by EKS managed node groups, and that is why you see two launch templates. The launch template used by the underlying auto scaling group is the one created by EKS MNG.
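One way to see this is to compare the launch template the node group reports against the one its auto scaling group actually uses (a sketch with the AWS CLI; the names are taken from this issue, so substitute your own):

```bash
# Sketch only: cluster/node group names are taken from this issue; adjust as needed.
CLUSTER=swi-dev-3-bottlerocket
NODEGROUP=fyn-nodegroup-20250407060414008800000003

# The custom launch template the managed node group was given
aws eks describe-nodegroup \
  --cluster-name "$CLUSTER" \
  --nodegroup-name "$NODEGROUP" \
  --query 'nodegroup.launchTemplate'

# The launch template the underlying auto scaling group actually launches with
# (the EKS-generated one, derived from the custom template above)
ASG=$(aws eks describe-nodegroup \
  --cluster-name "$CLUSTER" \
  --nodegroup-name "$NODEGROUP" \
  --query 'nodegroup.resources.autoScalingGroups[0].name' --output text)

aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "$ASG" \
  --query 'AutoScalingGroups[0].[LaunchTemplate, MixedInstancesPolicy.LaunchTemplate.LaunchTemplateSpecification]'
```

Seeing two different template IDs here is expected. If nodes still fail to join, the cause is usually something else, such as subnet routing, the cluster security group, or the node IAM role, rather than the presence of two templates.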