
improve skip devices allocation for running pods #133452


Draft
wants to merge 1 commit into master

Conversation

daimaxiaxie

What type of PR is this?

/kind bug

What this PR does / why we need it:

Makes pods more stable when kubelet restarts.

Which issue(s) this PR is related to:

Fixes #133451

Special notes for your reviewer:

Does this PR introduce a user-facing change?

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. release-note-none Denotes a PR that doesn't merit a release note. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. labels Aug 9, 2025
@k8s-ci-robot
Contributor

Please note that we're already in Test Freeze for the release-1.34 branch. This means every merged PR will be automatically fast-forwarded via the periodic ci-fast-forward job to the release branch of the upcoming v1.34.0 release.

Fast forwards are scheduled to happen every 6 hours, whereas the most recent run was: Sat Aug 9 04:20:07 UTC 2025.

@k8s-ci-robot k8s-ci-robot added do-not-merge/needs-kind Indicates a PR lacks a `kind/foo` label and requires one. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 9, 2025
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-priority Indicates a PR lacks a `priority/foo` label and requires one. and removed do-not-merge/needs-kind Indicates a PR lacks a `kind/foo` label and requires one. labels Aug 9, 2025
@k8s-ci-robot
Contributor

Welcome @daimaxiaxie!

It looks like this is your first PR to kubernetes/kubernetes 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubernetes has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Aug 9, 2025
@k8s-ci-robot
Contributor

Hi @daimaxiaxie. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Aug 9, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: daimaxiaxie
Once this PR has been reviewed and has the lgtm label, please assign ffromani for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

klog.V(4).InfoS("Container not present in the initial running set", "podUID", podUID, "containerName", cntName, "containerID", cntID)
return false
}
found := false
Contributor


Thanks for your submission. We do indeed have more evidence that the original fix is incomplete, but I have reason to believe the actual problem is in the first part of this guard:

if !m.sourcesReady.AllReady() && m.isContainerAlreadyRunning(podUID, contName) {
klog.V(3).InfoS("container detected running, nothing to do", "deviceNumber", needed, "resourceName", resource, "podUID", podUID, "containerName", contName)
return nil, nil
}

In other words, I think we are misusing m.sourcesReady.AllReady(), because AllReady() actually turns true when the sources are all connected, not when they have processed all the pods. We assumed the latter; it turns out it is actually the former (pending final verification). This explains why the failure does not happen at every restart: we still have a race, albeit a less likely one (hopefully less likely?).
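For illustration only, here is a minimal sketch of that semantic, with assumed types and names (not the actual kubelet implementation): readiness flips to true as soon as every expected source has connected, independent of whether the pods delivered by those sources have been processed yet.

```go
// Minimal sketch, assuming a simplified shape for a sources-ready tracker.
package main

import (
	"fmt"
	"sync"
)

type sourcesReady struct {
	mu       sync.Mutex
	expected map[string]bool // sources we wait for, e.g. "file", "api"
	seen     map[string]bool // sources that have connected so far
}

func newSourcesReady(expected ...string) *sourcesReady {
	s := &sourcesReady{expected: map[string]bool{}, seen: map[string]bool{}}
	for _, src := range expected {
		s.expected[src] = true
	}
	return s
}

// AddSource records that a source has connected. Nothing here waits for the
// pods from that source to be re-admitted or started.
func (s *sourcesReady) AddSource(src string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.seen[src] = true
}

// AllReady is true once every expected source has connected.
func (s *sourcesReady) AllReady() bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	for src := range s.expected {
		if !s.seen[src] {
			return false
		}
	}
	return true
}

func main() {
	s := newSourcesReady("file", "api")
	s.AddSource("file")
	s.AddSource("api")
	// True immediately after both sources connect, even though no pod from
	// either source has gone through admission yet: the window for the race
	// described above.
	fmt.Println(s.AllReady())
}
```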

My initial thought is we can probably just write

	if m.isContainerAlreadyRunning(podUID, contName) {
		klog.V(3).InfoS("container detected running, nothing to do", "deviceNumber", needed, "resourceName", resource, "podUID", podUID, "containerName", contName)
		return nil, nil
	}

but this has to be very carefully validated.

Author


Good idea. It seems that both AllReady and isContainerAlreadyRunning may have problems.

From my logs, it appears that isContainerAlreadyRunning was entered twice and returned different results:

I0731 16:58:54.195904  909172 manager.go:1100] "container found in the initial set, assumed running" podUID="58cf408f-5297-4209-96a0-f2c367392151" containerName="app" containerID="f393bcca6c8028c74f8987165759a472b3085f4bdf26173eb3dbb4cbe1f6cc9d"
I0731 16:58:54.195953  909172 manager.go:1095] "container not present in the initial running set" podUID="58cf408f-5297-4209-96a0-f2c367392151" containerName="app" containerID="491ae36456e71a83a6c39270e364e85306f583f959b386660262898086462374"

From the perspective of #133382, it seems that AllReady also has problems; I will investigate that as well.

Contributor

@ffromani ffromani Aug 9, 2025


I agree isContainerAlreadyRunning can have bugs. The intention to use it was and is:

  1. create the initial set at startup, once
  2. check for presence during the allocation flow

In other words, the set is supposed to be created once, using the data from the container runtime, and never mutated again, which greatly reduces the chance of introducing bugs (roughly the pattern sketched below).
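As a sketch of that build-once / check-membership intent, with assumed types and names (not the actual device manager code):

```go
// Sketch of the intended pattern: build the running set once at startup from
// container runtime data, then only do membership checks during allocation.
package main

import "fmt"

type containerKey struct {
	podUID        string
	containerName string
}

type runningSet map[containerKey]bool

// buildInitialRunningSet runs exactly once at kubelet startup; the resulting
// set is never mutated afterwards.
func buildInitialRunningSet(runtimeContainers []containerKey) runningSet {
	set := runningSet{}
	for _, c := range runtimeContainers {
		set[c] = true
	}
	return set
}

// isContainerAlreadyRunning is then a pure membership check in the
// allocation flow.
func (s runningSet) isContainerAlreadyRunning(podUID, containerName string) bool {
	return s[containerKey{podUID: podUID, containerName: containerName}]
}

func main() {
	set := buildInitialRunningSet([]containerKey{{podUID: "uid-1", containerName: "app"}})
	fmt.Println(set.isContainerAlreadyRunning("uid-1", "app")) // true
	fmt.Println(set.isContainerAlreadyRunning("uid-2", "app")) // false
}
```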

In your observation, which flow can make the container running set inconsistent?

Contributor

@ffromani ffromani Aug 9, 2025


Ah, I see. There could be an inconsistency between containerMap and containerRunningSet; this is also what you mentioned in the commit message. Am I right?
It is still not very clear to me how the new code is more robust: it seems an equivalent rewrite with somewhat different pros and cons.
The original code was written to be as defensive as possible and to reuse as much as possible of the data we already collect in kubelet. The intention was to make a minimal, as-safe-as-possible incremental change, because this flow (kubelet restart) is ancient, rarely touched, and hard to test. The fact that the code is hard to test is unfortunate, and we should eventually rectify this.
It's possible the original intent backfired and led to a suboptimal, still buggy implementation.

Author

@daimaxiaxie daimaxiaxie Aug 10, 2025


Yes, you are right. There is an inconsistency between the map and the set: the map still contains containers that have already stopped, which causes the same containerName to map to two different containerIDs.
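To illustrate the scenario, here is a sketch with assumed shapes (not the actual kubelet types) and container IDs truncated from the logs quoted above: a map of known containers keyed by containerID can hold both a stopped and a running instance of the same container name, so a reverse lookup by (podUID, containerName) depends on map iteration order.

```go
// Sketch of the described inconsistency between the container map and the
// running set; types and IDs are illustrative only.
package main

import "fmt"

type containerInfo struct {
	podUID        string
	containerName string
}

func main() {
	// Both the old (stopped) and the new (running) instance of "app" are known.
	knownContainers := map[string]containerInfo{
		"491ae364": {podUID: "58cf408f", containerName: "app"}, // stopped
		"f393bcca": {podUID: "58cf408f", containerName: "app"}, // running
	}
	// Only the new instance was observed as running at startup.
	running := map[string]bool{"f393bcca": true}

	// Reverse lookup returns whichever matching ID the map yields first.
	lookupID := func(podUID, name string) string {
		for id, info := range knownContainers {
			if info.podUID == podUID && info.containerName == name {
				return id
			}
		}
		return ""
	}

	// Depending on iteration order this prints true or false for the same
	// (podUID, containerName) pair, matching the two contradictory log lines.
	id := lookupID("58cf408f", "app")
	fmt.Println(id, running[id])
}
```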

@ffromani
Contributor

ffromani commented Aug 9, 2025

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Aug 9, 2025
@k8s-ci-robot
Contributor

@daimaxiaxie: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-kubernetes-e2e-gce | ae511df | link | true | /test pull-kubernetes-e2e-gce |
| pull-kubernetes-unit-windows-master | ae511df | link | false | /test pull-kubernetes-unit-windows-master |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@bart0sh bart0sh moved this from Triage to Work in progress in SIG Node: code and documentation PRs Aug 12, 2025
Development

Successfully merging this pull request may close these issues.

When kubelet restarts, admission still rejects pods