
scheduler cache: Fix memory leak in scheduler cache management #133411


Open

ravisastryk wants to merge 1 commit into master from fix/scheduler-cache-memory-leak-assumedpods-cleanup

Conversation


@ravisastryk ravisastryk commented Aug 7, 2025

What type of PR is this?

/kind bug

What this PR does / why we need it:

This fixes a memory leak in the scheduler framework, specifically in the pod-removal code path. It was caused by improper slice element removal in removeFromSlice(). When pods were removed from the NodeInfo slices (Pods, PodsWithAffinity, PodsWithRequiredAntiAffinity), the function was:

  1. Moving the last element to the deleted position
  2. Shrinking the slice with s = s[:len(s)-1]
  3. Not clearing the moved-element reference left in the backing array. The fix I am proposing is to explicitly clear that reference.

This meant that even after a pod was deleted, the backing array still held references to PodInfo objects at positions beyond the slice length, preventing garbage collection (GC). With StatefulSets using PVCs and anti-affinity (which populate these slices), this creates a significant memory leak: pods are deleted, but their PodInfo objects remain referenced in memory.
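A minimal sketch of the pattern and the proposed fix (PodInfo here is a stand-in struct, and the index-based signature is a simplification; the real removeFromSlice in pkg/scheduler/framework matches entries by pod rather than taking an index):

// PodInfo stands in for the scheduler's framework.PodInfo type in this sketch.
type PodInfo struct{ name string }

// Simplified sketch, not the exact scheduler code: remove element i from a
// slice of *PodInfo by swapping in the last element and shrinking the slice.
func removeFromSlice(s []*PodInfo, i int) []*PodInfo {
	last := len(s) - 1
	s[i] = s[last]
	// Proposed fix: clear the trailing slot so the backing array no longer
	// holds a stale reference and the GC can later reclaim removed PodInfos.
	s[last] = nil
	return s[:last]
}

Without the s[last] = nil line, the slot just past the new length keeps pointing at a PodInfo that may already have been removed from the visible slice, and such stale references accumulate in the backing array as pods are deleted.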

Which issue(s) this PR is related to:

Fixes #133365

Special notes for your reviewer:

  • This is a critical fix: it resolves a memory leak that affects production scheduler deployments.
  • The fix builds on collaborative input from the Kubernetes community; this PR is my attempt to identify the root cause and address it.
  • I am proposing that we explicitly clear the reference after moving the element, which allows the GC to properly clean up the deleted PodInfo objects.
go test -v ./pkg/scheduler/framework

Does this PR introduce a user-facing change?

Fix a memory leak in the kube-scheduler framework where assumed pods were not properly cleaned up during pod removal, causing memory usage to grow.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

N/A

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. labels Aug 7, 2025
@k8s-ci-robot
Contributor

Please note that we're already in Test Freeze for the release-1.34 branch. This means every merged PR will be automatically fast-forwarded via the periodic ci-fast-forward job to the release branch of the upcoming v1.34.0 release.

Fast forwards are scheduled to happen every 6 hours, whereas the most recent run was: Thu Aug 7 03:00:08 UTC 2025.


linux-foundation-easycla bot commented Aug 7, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: ravisastryk / name: Ravi Sastry Kadali (58d6d89)

@k8s-ci-robot k8s-ci-robot added the do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Aug 7, 2025
@k8s-ci-robot
Contributor

Welcome @ravisastryk!

It looks like this is your first PR to kubernetes/kubernetes 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubernetes has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Aug 7, 2025
@k8s-ci-robot
Contributor

Hi @ravisastryk. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-priority Indicates a PR lacks a `priority/foo` label and requires one. label Aug 7, 2025
@k8s-ci-robot k8s-ci-robot added sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Aug 7, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ravisastryk
Once this PR has been reviewed and has the lgtm label, please assign dom4ha for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Aug 7, 2025
@ravisastryk ravisastryk marked this pull request as ready for review August 7, 2025 04:01
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Aug 7, 2025
@k8s-ci-robot k8s-ci-robot requested a review from damemi August 7, 2025 04:02
@ravisastryk
Author

/assign @ravisastryk

@AxeZhan
Member

AxeZhan commented Aug 7, 2025

Aren't these two functions equivalent?

// Delete removes all items from the set.
func (s Set[T]) Delete(items ...T) Set[T] {
	for _, item := range items {
		delete(s, item)
	}
	return s
}

It seems set.Delete simply calls the builtin delete; the only difference is that this function can handle multiple keys.

@ravisastryk
Author

ravisastryk commented Aug 7, 2025


Thank you for your quick review @AxeZhan. You are right: sets.Set[T] is just map[T]Empty under the hood, so both deletion methods are functionally identical, and the potential problem is not in this path.
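For illustration, a minimal sketch (assuming the generic k8s.io/apimachinery/pkg/util/sets package) showing that Set.Delete and the builtin delete are interchangeable for a single key:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/sets"
)

func main() {
	// sets.Set[T] is declared as map[T]Empty, so both forms below remove "b".
	s1 := sets.New[string]("a", "b", "c")
	s1.Delete("b") // variadic: can remove several keys in one call

	s2 := sets.New[string]("a", "b", "c")
	delete(s2, "b") // builtin delete works directly on the underlying map

	fmt.Println(s1.Equal(s2)) // true
}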

I attempted a couple of approaches in the cache management code, but that does not appear to be the root cause.

However, I have now discovered that the memory leak likely occurs in the scheduler framework, specifically in the pod-removal code path. It was caused by improper slice element removal in removeFromSlice(). When pods were removed from the NodeInfo slices (Pods, PodsWithAffinity, PodsWithRequiredAntiAffinity), the function was:

  1. Moving the last element to the deleted position
  2. Shrinking the slice with s = s[:len(s)-1]
  3. Not clearing the moved-element reference left in the backing array. The fix I am proposing is to explicitly clear that reference (see the illustration below).
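
As a self-contained illustration of why the stale backing-array slot prevents collection (standalone example code, not scheduler code; the podInfo type is a stand-in, and finalizer timing depends on the GC and is not guaranteed):

package main

import (
	"fmt"
	"runtime"
	"time"
)

type podInfo struct{ name string }

func main() {
	p := &podInfo{name: "removed"}
	runtime.SetFinalizer(p, func(pi *podInfo) { fmt.Println("collected:", pi.name) })

	s := []*podInfo{{name: "a"}, {name: "b"}, p}

	// Remove the last element using the swap-and-shrink pattern.
	i := 2
	s[i] = s[len(s)-1]
	s[len(s)-1] = nil // without this line the backing array still references p beyond len(s)
	s = s[:len(s)-1]

	p = nil
	runtime.GC()
	time.Sleep(100 * time.Millisecond) // give the finalizer goroutine a chance to run
	fmt.Println("remaining elements:", len(s))
}

Commenting out the s[len(s)-1] = nil line keeps p reachable through the backing array, so the finalizer never fires and the object is never collected.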

@ravisastryk ravisastryk force-pushed the fix/scheduler-cache-memory-leak-assumedpods-cleanup branch from b3d7bcf to 73171fe on August 7, 2025 06:52
@k8s-ci-robot k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Aug 7, 2025
@ravisastryk ravisastryk force-pushed the fix/scheduler-cache-memory-leak-assumedpods-cleanup branch from 73171fe to 542d54e on August 7, 2025 06:54
@ravisastryk ravisastryk changed the title from "scheduler cache: Fix memory leak in scheduler cache using Sets.Delete()" to "scheduler cache: Fix memory leak in scheduler cache management" on Aug 7, 2025
@ravisastryk ravisastryk force-pushed the fix/scheduler-cache-memory-leak-assumedpods-cleanup branch from 542d54e to e415eef on August 7, 2025 15:34
@k8s-ci-robot k8s-ci-robot added size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Aug 7, 2025
@ping035627
Contributor

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Aug 8, 2025
@ravisastryk
Author

/retest

@ravisastryk ravisastryk force-pushed the fix/scheduler-cache-memory-leak-assumedpods-cleanup branch from e415eef to f51c404 on August 8, 2025 06:26
@k8s-ci-robot k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Aug 8, 2025
@ravisastryk
Author

/retest-required

…ete() instead

code review comments changes

scheduler cache snapshot management

revert set deletion changes

improper slice element removal

skip if any pod info is nil

nil pointer dereference fix
@ravisastryk ravisastryk force-pushed the fix/scheduler-cache-memory-leak-assumedpods-cleanup branch from f51c404 to 58d6d89 on August 8, 2025 16:45
@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Aug 8, 2025
@k8s-ci-robot
Contributor

@ravisastryk: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-kubernetes-unit 58d6d89 link true /test pull-kubernetes-unit
pull-kubernetes-e2e-kind 58d6d89 link true /test pull-kubernetes-e2e-kind
pull-kubernetes-e2e-gce 58d6d89 link true /test pull-kubernetes-e2e-gce
pull-kubernetes-integration 58d6d89 link true /test pull-kubernetes-integration

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Labels
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • kind/bug - Categorizes issue or PR as related to a bug.
  • needs-priority - Indicates a PR lacks a `priority/foo` label and requires one.
  • needs-triage - Indicates an issue or PR lacks a `triage/foo` label and requires one.
  • ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
  • release-note - Denotes a PR that will be considered when it comes time to generate release notes.
  • sig/scheduling - Categorizes an issue or PR as relevant to SIG Scheduling.
  • sig/storage - Categorizes an issue or PR as relevant to SIG Storage.
  • size/M - Denotes a PR that changes 30-99 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

scheduler memory increases after uninstalled pods.
4 participants