kubelet: don't fetch image credentials if the image is present and if we don't need to check if the pod is allowed to pull it #133079
Conversation
Hi @atykhyy. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/ok-to-test
@atykhyy: The following test failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
```diff
@@ -178,6 +178,12 @@ func (m *imageManager) EnsureImageExists(ctx context.Context, objRef *v1.ObjectR
 		return "", message, err
 	}
 
+	if imageRef != "" && !utilfeature.DefaultFeatureGate.Enabled(features.KubeletEnsureSecretPulledImages) {
```
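As a reading aid, here is a minimal, self-contained sketch of the guard's logic (names mirror the hunk; the kubelet plumbing around it is stubbed out, so this is illustrative rather than the PR's actual code):

```go
package main

import "fmt"

// shouldSkipCredentialLookup models the guard above: if the image is already
// on the node (imageRef != "") and the KubeletEnsureSecretPulledImages feature
// gate is off, nothing will consume the credentials, so the credential
// provider round trip can be skipped entirely.
func shouldSkipCredentialLookup(imageRef string, ensureSecretPulledImages bool) bool {
	return imageRef != "" && !ensureSecretPulledImages
}

func main() {
	fmt.Println(shouldSkipCredentialLookup("sha256:abc123", false)) // true: return early, image present
	fmt.Println(shouldSkipCredentialLookup("", false))              // false: image must be pulled
	fmt.Println(shouldSkipCredentialLookup("sha256:abc123", true))  // false: verification may need credentials
}
```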
This change looks fine and likely restores the old behavior. The issue will still exist when the `KubeletEnsureSecretPulledImages` feature gate is enabled by default.

@stlaz we should think about how to avoid calling credential provider plugins when the image is preloaded or the pull policy is `Never`.
cc @liggitt
> The issue will still exist when the KubeletEnsureSecretPulledImages feature gate is enabled by default.

Yes. I made a note to myself to configure it to not verify when it goes GA.

> avoid calling credential provider plugins when the image is preloaded

As I see it, this would have to rely on some kind of declared dependency between pods, such that pod A being allowed to access an image means that pod B (somehow linked to A, perhaps with an annotation) is also allowed without additional checks. The upgrade workflow in which I stumbled upon this issue already requires a preflight daemonset, which runs while the previous version of the main daemonset is still running, to pull the new version's image: once the main daemonset pod stops, the kubelet won't be able to pull the new image any more than it will be able to acquire credentials. There is no problem with the kubelet checking pod image accessibility for preflight daemonset pods.

It is of course possible to add the critical image to `preloadedImagesVerificationAllowlist`, but that allows every pod to access it.
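For reference, a sketch of what that allowlist route looks like in the kubelet configuration; field names follow the KEP-2535 `KubeletConfiguration` additions as I understand them, and the image pattern is hypothetical:

```yaml
# Sketch only: allowlisting a critical image so preloaded copies skip
# credential verification. Note this applies node-wide, to every pod.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imagePullCredentialsVerificationPolicy: NeverVerifyAllowlistedImages
preloadedImagesVerificationAllowlist:
  - "registry.example.com/cilium/cilium"  # hypothetical image pattern
```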
Narrowly moving this if block up is ~fine for now, but I expect the credential verification feature to go to GA and default-enable ~soon, at which point credential providers will be called again.

> as credentials provider plugins may not be able to acquire credentials while the network management daemonset pod is being recreated

That seems like a potential deadlock issue with those credential plugins that needs resolving independently of the credential reverification feature.
> credential plugins that needs resolving independently of the credential reverification feature

I.e. to authenticate to the container registry and pull the image? That's handled by preflight daemonsets, as I mentioned above.
> at which point credential providers will be called again

Yes, but that's not correct behavior, is it? E.g. if the image pull credential verification policy is NeverVerify, there is no reason to call them. I added an early check for decisions of this type: #133114
/triage accepted
/lgtm
LGTM label has been added.

Git tree hash: 7aedc56db3836b3cfc20fe9fae2863404775e188
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: atykhyy, liggitt

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
Some decisions about image pull credential verification can be applied without reference to image pull credentials: if the policy is NeverVerify, or if it is NeverVerifyPreloadedImages and the image is preloaded, or if it is NeverVerifyAllowListedImages and the image is allowlisted. In these cases, there is no need to look up credentials. Related PR: kubernetes#133079
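A minimal sketch of that early check (policy names are taken from the commit message above; the `preloaded`/`allowlisted` predicates are hypothetical stand-ins for the kubelet's real checks):

```go
package main

import "fmt"

// Policy values as named in the commit message above.
const (
	NeverVerify                  = "NeverVerify"
	NeverVerifyPreloadedImages   = "NeverVerifyPreloadedImages"
	NeverVerifyAllowListedImages = "NeverVerifyAllowListedImages"
)

// credentialsNeeded returns false for the three cases that can be decided
// without ever looking up image pull credentials.
func credentialsNeeded(policy string, preloaded, allowlisted bool) bool {
	switch {
	case policy == NeverVerify:
		return false
	case policy == NeverVerifyPreloadedImages && preloaded:
		return false
	case policy == NeverVerifyAllowListedImages && allowlisted:
		return false
	}
	return true // all other cases may need credentials to verify access
}

func main() {
	fmt.Println(credentialsNeeded(NeverVerify, false, false))                 // false
	fmt.Println(credentialsNeeded(NeverVerifyPreloadedImages, true, false))   // false
	fmt.Println(credentialsNeeded(NeverVerifyAllowListedImages, false, true)) // false
	fmt.Println(credentialsNeeded(NeverVerifyPreloadedImages, false, false))  // true
}
```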
What type of PR is this?
/kind bug
What this PR does / why we need it:
Commit 3793bec changed kubelet's behavior such that it always looks up container registry credentials, even if the image is already present on the node, the image pull policy is `Never` or `IfNotPresent`, and the experimental feature gate `KubeletEnsureSecretPulledImages` is disabled or configured to not verify pod image access. This change creates a problem for network management solutions such as Cilium, as credential provider plugins may not be able to acquire credentials while the network management daemonset pod is being recreated (e.g. after an update).

This problem should not normally be fatal, because `DockerConfigProvider.Provide()` just returns an empty set of credentials when the credential provider fails. However, depending on the DNS and other connection timeouts configured in the credential provider (a component external to kubelet), the new behavior leads to potentially long delays in network management daemonset pod startup, especially when such pods use many init containers to tighten up the main container's security context. In my testing, a standard Cilium daemonset with 6 init containers, a main agent container, and a sidecar takes 3 minutes to restart (8 × 20s spent in the credential provider plugin waiting for the DNS request to time out, plus a few seconds for the containers to actually start), during which interval the whole node is not fully functional.

This PR restores the original behavior. There appears to be no reason to look up container registry credentials if they are not going to be used.
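To make the delay arithmetic from the description concrete (the container count and per-lookup timeout are taken from the paragraph above; the rest is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// 6 init containers + main agent + sidecar = 8 sequential image lookups,
	// each waiting out the credential plugin's ~20s DNS timeout.
	const lookups = 8
	perLookup := 20 * time.Second
	fmt.Println(time.Duration(lookups) * perLookup) // 2m40s; ~3 minutes with container start overhead
}
```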
Which issue(s) this PR is related to:
N/A
Special notes for your reviewer:
None
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: