
PodStartSLIDuration excludes init container runtime and excludes stateful pods #131950

Open · wants to merge 4 commits into master

Conversation


@alimaazamat alimaazamat commented May 23, 2025

What type of PR is this?

/kind bug

What this PR does / why we need it:

According to the pod startup latency SLO definition (https://github.com/kubernetes/community/blob/master/sig-scalability/slos/pod_startup_latency.md), PodStartSLIDuration should exclude init container run time and should only consider stateless pods.

How this was tested:

kind create cluster --name $CLUSTER_NAME
# rebuild the kubelet for Linux, since kind runs Linux containers
GOOS=linux GOARCH=amd64 go build -o _output/kubelet ./cmd/kubelet
# deploy custom kubelet to node
docker cp _output/kubelet ${CLUSTER_NAME}-control-plane:/usr/bin/kubelet
docker exec ${CLUSTER_NAME}-control-plane systemctl restart kubelet
kubectl apply -f stateless-stateful-test.yaml
# get metrics
kubectl get --raw="/api/v1/nodes/${CLUSTER_NAME}-control-plane/proxy/metrics" | grep "kubelet_pod_start_sli_duration_seconds"

If you want to test another deployment and reset metrics:

kubectl delete deployment stateless-test stateful-test
# reset metrics by restarting kubelet
docker exec ${CLUSTER_NAME}-control-plane systemctl restart kubelet
kubectl apply -f init-container-test.yaml
# get metrics
kubectl get --raw="/api/v1/nodes/${CLUSTER_NAME}-control-plane/proxy/metrics" | grep "kubelet_pod_start_sli_duration_seconds"

Tested this with 5 stateless and 10 stateful pods, using a stateless-stateful-test.yaml deployment like:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-test
  labels:
    app: stateless-test
spec:
  replicas: 5
  selector:
    matchLabels:
      app: stateless-test
  template:
    metadata:
      labels:
        app: stateless-test
    spec:
      initContainers:
      - name: init-container
        image: busybox:1.36
        imagePullPolicy: Always
        command: ["sh", "-c", "sleep 10"]
      containers:
      - name: main-container
        image: nginx:1.25-alpine 
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        # No persistent volumes - this is stateless
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateful-test
  labels:
    app: stateful-test
spec:
  replicas: 10
  selector:
    matchLabels:
      app: stateful-test
  template:
    metadata:
      labels:
        app: stateful-test
    spec:
      initContainers:
      - name: init-container
        image: alpine:3.19
        imagePullPolicy: Always
        command: ["sh", "-c", "sleep 10"]
      containers:
      - name: main-container
        image: nginx:1.25-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        volumeMounts:
        - name: persistent-storage
          mountPath: /data
      volumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: test-pvc

and got the correct behavior of kubelet_pod_start_sli_duration_seconds_count 5, so the metric correctly counts only the 5 stateless pods.

Tested this with an init container sleep time of 10 seconds, with an init-container-test.yaml deployment like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: init-container-test
spec:
  replicas: 50
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      nodeName: $YOUR_NODE_NAME
      initContainers:
      - name: init
        image: busybox
        command: ["sh", "-c", "sleep 10"]
      containers:
      - name: test
        image: k8s.gcr.io/pause:3.9

and got the correct behavior of

kubelet_pod_start_sli_duration_seconds_bucket{le="0.5"} 0
kubelet_pod_start_sli_duration_seconds_bucket{le="1"} 5
kubelet_pod_start_sli_duration_seconds_bucket{le="2"} 36
kubelet_pod_start_sli_duration_seconds_bucket{le="3"} 47
kubelet_pod_start_sli_duration_seconds_bucket{le="4"} 50
kubelet_pod_start_sli_duration_seconds_bucket{le="5"} 50

All observations land in buckets below le="10", confirming that the 10-second init container sleep is excluded from the recorded duration.
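
Since the exported buckets are cumulative (each count includes all observations at or below its `le` bound), a coarse percentile can be read off them directly. A minimal sketch of that reading, with hypothetical names (`bucket`, `approxQuantile`) that are not part of the kubelet:

```go
package main

import "fmt"

// bucket holds one cumulative histogram bucket as exported by Prometheus.
type bucket struct {
	le    float64 // upper bound of the bucket
	count int     // cumulative count of observations <= le
}

// approxQuantile returns the upper bound of the first bucket whose
// cumulative count reaches fraction q of the total. This is a coarse
// upper-bound estimate; Prometheus itself interpolates within buckets.
func approxQuantile(buckets []bucket, q float64) float64 {
	total := float64(buckets[len(buckets)-1].count)
	target := q * total
	for _, b := range buckets {
		if float64(b.count) >= target {
			return b.le
		}
	}
	return buckets[len(buckets)-1].le
}

func main() {
	// Cumulative bucket counts from the init-container test above.
	buckets := []bucket{{0.5, 0}, {1, 5}, {2, 36}, {3, 47}, {4, 50}, {5, 50}}
	fmt.Println(approxQuantile(buckets, 0.99)) // → 4
}
```

On the data above this estimates p99 at ≤ 4 s for pods whose init container alone slept 10 s, which is only possible if the init container time is no longer counted.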

Which issue(s) this PR fixes:

Fixes #131733

Special notes for your reviewer:

Does this PR introduce a user-facing change?

kubelet_pod_start_sli_duration_seconds_bucket metric now matches pod startup latency SLI/SLO documentation.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 23, 2025
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot
Contributor

Welcome @alimaazamat!

It looks like this is your first PR to kubernetes/kubernetes 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubernetes has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label May 23, 2025
@k8s-ci-robot
Contributor

Hi @alimaazamat. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-priority Indicates a PR lacks a `priority/foo` label and requires one. label May 23, 2025
@k8s-ci-robot k8s-ci-robot requested review from feiskyer and kannon92 May 23, 2025 23:28
@k8s-ci-robot k8s-ci-robot added area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node. labels May 23, 2025
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 23, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: alimaazamat
Once this PR has been reviewed and has the lgtm label, please assign random-liu for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@alimaazamat alimaazamat changed the title [WIP]: initContainerDuration SLI created and subtracted from podStartSLOdura… [WIP]: PodStartSLIDuration excludes init container runtime May 23, 2025
@alimaazamat alimaazamat force-pushed the subtract-init-container-runtime-from-slo branch 2 times, most recently from b606046 to 84042ad Compare May 23, 2025 23:47
@alimaazamat
Author

/sig node

@alimaazamat
Author

alimaazamat commented May 28, 2025

Waiting for feedback from Sig Node before proceeding...
We expect additional changes to be added (excluding init container runtime, excluding some stateful pods, and excluding schedulable pods)
SLO definition doesn't match the formula being used, but perhaps the definition needs to be updated.

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels May 30, 2025
@alimaazamat
Author

/retest

@alimaazamat alimaazamat force-pushed the subtract-init-container-runtime-from-slo branch 2 times, most recently from e3021d0 to 2cd2f30 Compare July 10, 2025 23:59
@alimaazamat
Author

/retest

@alimaazamat
Author

/test pull-kubernetes-e2e-kind-ipv6

@alimaazamat
Author

/retest

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 22, 2025
@alimaazamat alimaazamat force-pushed the subtract-init-container-runtime-from-slo branch from 4e25a34 to d7accc6 Compare July 25, 2025 00:20
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 25, 2025
@alimaazamat
Author

/retest

@alimaazamat alimaazamat changed the title [WIP]: PodStartSLIDuration excludes init container runtime PodStartSLIDuration excludes init container runtime and excludes stateful pods Jul 25, 2025
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jul 25, 2025
@jackfrancis
Contributor

cc @haircommander, this is the metrics fix/change mentioned in SIG Node today

cc @dgrisonnet, is this something you'd have thoughts on?

@dgrisonnet
Member

PodStartSLIDuration should not include init container run time and only consider stateless pods

I think that this statement makes sense. Excluding all the external factors that could impact pod startup will make the PodStartSLIDuration metric more accurate to define SLOs for kubelet.

@alimaazamat
Author

alimaazamat commented Aug 5, 2025

@dgrisonnet Thank you for your comment, fixed accordingly (also put state.metricRecorded = true at the end after all possible metrics could be recorded)! PTAL!

@dgrisonnet
Member

Perfect, thanks.

This looks good to me from a sig-instrumentation standpoint.

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 6, 2025
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: c51b8f2afb23294dfc6aefdf39399ac90b642dd2

@jackfrancis
Contributor

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Aug 6, 2025
@k8s-ci-robot
Contributor

@alimaazamat: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Required | Rerun command |
| --- | --- | --- | --- |
| pull-kubernetes-cmd | 89336bb | true | /test pull-kubernetes-cmd |
| pull-kubernetes-e2e-kind | 89336bb | true | /test pull-kubernetes-e2e-kind |
| pull-kubernetes-conformance-kind-ga-only-parallel | 89336bb | true | /test pull-kubernetes-conformance-kind-ga-only-parallel |
| pull-kubernetes-e2e-gce | 89336bb | true | /test pull-kubernetes-e2e-gce |
| pull-kubernetes-integration | 89336bb | true | /test pull-kubernetes-integration |
| pull-kubernetes-verify | 89336bb | true | /test pull-kubernetes-verify |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Labels
area/kubelet cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/bug Categorizes issue or PR as related to a bug. lgtm "Looks good to me", indicates that a PR is ready to be merged. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. release-note Denotes a PR that will be considered when it comes time to generate release notes. sig/node Categorizes an issue or PR as relevant to SIG Node. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.

Successfully merging this pull request may close these issues.

kubelet_pod_start_sli_duration_seconds appears not to match its specification, at least as far as excluding init container runtime
6 participants