Description
When doing a `user_data` update or an AMI update on my node group, which has `min: 1`, `max: 1`, and `desired_size: 1`, the ASG spins up 5 more instances for no apparent reason.
✋ I have searched the open/closed issues and my issue is not listed.
Versions
Module version [Required]: 20.36.0
Terraform version: OpenTofu 1.9.0
Provider version(s): aws 5.95.0
Reproduction Code [Required]
Pasted above
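A minimal sketch of the node group configuration in question (the cluster name, network IDs, instance type, and user data content below are placeholders, not my actual values):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.36.0"

  # Placeholder cluster/network settings
  cluster_name    = "example"
  cluster_version = "1.32"
  vpc_id          = "vpc-xxxxxxxx"
  subnet_ids      = ["subnet-aaaaaaaa", "subnet-bbbbbbbb"]

  eks_managed_node_groups = {
    default = {
      min_size     = 1
      max_size     = 1
      desired_size = 1

      instance_types = ["m6i.large"]

      # Changing this (or the AMI) is what triggers the behavior described below
      pre_bootstrap_user_data = <<-EOT
        echo "placeholder pre-bootstrap step"
      EOT
    }
  }
}
```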
Steps to reproduce the behavior:
Expected behavior
I'm expecting EKS to create a single replacement node and then delete the old one, not scale the group up to 6 nodes and then delete 5.
Actual behavior
This is the ASG activity history from a change to my `pre_bootstrap_user_data`: the group is scaled up one instance at a time from 1 to 6, then scaled back down to 1.
| Status | Instance ID | UTC Time | Local Start Time | Local End Time | Log Details |
|---|---|---|---|---|---|
| ✅ Launch | i-07cd11bde9c978d4f | 2025-04-23T06:50:49Z | 2025 April 23, 01:51:01 PM +07:00 | 2025 April 23, 01:52:10 PM +07:00 | At 2025-04-23T06:50:49Z a user request update of AutoScalingGroup constraints to min: 1, max: 2, desired: 2 changing the desired capacity from 1 to 2. At 2025-04-23T06:50:59Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 1 to 2. |
| ✅ Launch | i-0879b60cf3b843448 | 2025-04-23T06:52:53Z | 2025 April 23, 01:52:55 PM +07:00 | 2025 April 23, 01:54:07 PM +07:00 | At 2025-04-23T06:52:53Z a user request update of AutoScalingGroup constraints to min: 1, max: 3, desired: 3 changing the desired capacity from 2 to 3. At 2025-04-23T06:52:53Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 2 to 3. |
| ✅ Launch | i-008676abad9cb498d | 2025-04-23T06:54:56Z | 2025 April 23, 01:55:10 PM +07:00 | 2025 April 23, 01:56:20 PM +07:00 | At 2025-04-23T06:54:56Z a user request update of AutoScalingGroup constraints to min: 1, max: 4, desired: 4 changing the desired capacity from 3 to 4. At 2025-04-23T06:55:08Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 3 to 4. |
| ✅ Launch | i-0d91cf39d55ce4819 | 2025-04-23T06:57:00Z | 2025 April 23, 01:57:15 PM +07:00 | 2025 April 23, 01:58:24 PM +07:00 | At 2025-04-23T06:57:00Z a user request update of AutoScalingGroup constraints to min: 1, max: 5, desired: 5 changing the desired capacity from 4 to 5. At 2025-04-23T06:57:13Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 4 to 5. |
| ✅ Launch | i-00522e75bfa472fb5 | 2025-04-23T06:59:03Z | 2025 April 23, 01:59:09 PM +07:00 | 2025 April 23, 02:00:18 PM +07:00 | At 2025-04-23T06:59:03Z a user request update of AutoScalingGroup constraints to min: 1, max: 6, desired: 6 changing the desired capacity from 5 to 6. At 2025-04-23T06:59:07Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 5 to 6. |
| ✅ Terminate | i-07ce15c6c067ce5f7 | 2025-04-23T07:04:49Z | 2025 April 23, 02:04:49 PM +07:00 | 2025 April 23, 02:05:33 PM +07:00 | At 2025-04-23T07:04:49Z instance i-07ce15c6c067ce5f7 was taken out of service in response to a user request, shrinking the capacity from 6 to 5. |
| ✅ Terminate | i-0d91cf39d55ce4819 | 2025-04-23T07:05:41Z | 2025 April 23, 02:05:51 PM +07:00 | 2025 April 23, 02:07:36 PM +07:00 | At 2025-04-23T07:05:41Z a user request update of AutoScalingGroup constraints to min: 1, max: 5, desired: 4 changing the desired capacity from 5 to 4. At 2025-04-23T07:05:51Z an instance was taken out of service in response to a difference between desired and actual capacity, shrinking the capacity from 5 to 4. At 2025-04-23T07:05:51Z instance i-0d91cf39d55ce4819 was selected for termination. |
| ✅ Terminate | i-008676abad9cb498d | 2025-04-23T07:07:43Z | 2025 April 23, 02:07:45 PM +07:00 | 2025 April 23, 02:09:30 PM +07:00 | At 2025-04-23T07:07:43Z a user request update of AutoScalingGroup constraints to min: 1, max: 4, desired: 3 changing the desired capacity from 4 to 3. At 2025-04-23T07:07:45Z an instance was taken out of service in response to a difference between desired and actual capacity, shrinking the capacity from 4 to 3. At 2025-04-23T07:07:45Z instance i-008676abad9cb498d was selected for termination. |
| ✅ Terminate | i-00522e75bfa472fb5 | 2025-04-23T07:09:45Z | 2025 April 23, 02:09:50 PM +07:00 | 2025 April 23, 02:11:35 PM +07:00 | At 2025-04-23T07:09:45Z a user request update of AutoScalingGroup constraints to min: 1, max: 3, desired: 2 changing the desired capacity from 3 to 2. At 2025-04-23T07:09:50Z an instance was taken out of service in response to a difference between desired and actual capacity, shrinking the capacity from 3 to 2. At 2025-04-23T07:09:50Z instance i-00522e75bfa472fb5 was selected for termination. |
| ✅ Terminate | i-0879b60cf3b843448 | 2025-04-23T07:11:46Z | 2025 April 23, 02:11:55 PM +07:00 | (not specified) | At 2025-04-23T07:11:46Z a user request update of AutoScalingGroup constraints to min: 1, max: 2, desired: 1 changing the desired capacity from 2 to 1. At 2025-04-23T07:11:55Z an instance was taken out of service in response to a difference between desired and actual capacity, shrinking the capacity from 2 to 1. At 2025-04-23T07:11:55Z instance i-0879b60cf3b843448 was selected for termination. |
Oh, I see. I didn't know that detail. Is there a way to tune it to make user data deployments faster? If not, that's fine; I was just wondering what was going on.
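For what it's worth, the only related setting I'm aware of is the node group's `update_config`; a sketch of what I mean (assuming this is even the relevant knob for the behavior above):

```hcl
eks_managed_node_groups = {
  default = {
    # ... rest of the node group definition ...

    # Controls how many nodes EKS may replace in parallel during a
    # managed node group rolling update. Only one of the two keys
    # may be set at a time.
    update_config = {
      max_unavailable = 1
      # max_unavailable_percentage = 33
    }
  }
}
```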