What happened:
Hey 👋
After upgrading to Helm chart version 4.13.0, the defaultbackend pods started crashing with the following error:

```
Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: exec: "/server-arm64": permission denied
```

The defaultbackend is using the registry.k8s.io/defaultbackend-arm64 image with the default tag 1.5. Specifically, the image digest is d96c10f06fb8ccc90f6204e6e2f3cd58798e4ac08ca9ac85fbfedad00fcc8917.
We have both amd64 and arm64 clusters. The issue only occurs on the arm64 nodes; everything works as expected on amd64.
What you expected to happen:
The defaultbackend pod should start and run without crashing using Helm chart version 4.13.0.
What do you think went wrong?:
We suspect the issue is related to the default runAsGroup option on the defaultbackend container. Manually removing the runAsGroup option from the defaultbackend deployment allows the container to start successfully; a sketch of that manual patch follows below.
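For reference, this is roughly how we removed the field by hand. The namespace and deployment name below are placeholders for whatever your Helm release created (assumptions, not the chart's guaranteed names), and Helm will re-add the field on the next upgrade:

```sh
# Remove runAsGroup from the defaultbackend container's securityContext.
# <release>-defaultbackend is a placeholder; adjust namespace and name to your release.
kubectl -n ingress-nginx patch deployment <release>-defaultbackend \
  --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/securityContext/runAsGroup"}]'
```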
Additionally, in our testing, the defaultbackend deploys and runs correctly using chart version 4.13.0 if we override the image tag to use the previous version 1.4.
Based on this, we believe that the 1.5 tag of the arm64 image (registry.k8s.io/defaultbackend-arm64) may not be compatible with the current runAsGroup settings in the Helm chart. Pinning the tag back to 1.4 is our current workaround; a sketch is below.
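A minimal sketch of the tag pin, assuming the release is named ingress-nginx and was installed from the official chart; defaultBackend.image.tag is the chart value we override:

```sh
# Pin the default backend image to the previous tag as a temporary workaround.
# Release name and use of --reuse-values are assumptions about the setup.
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --version 4.13.0 \
  --reuse-values \
  --set defaultBackend.image.tag=1.4
```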
NGINX Ingress controller version (exec into the pod and run /nginx-ingress-controller --version): v1.13.0
Kubernetes version (use kubectl version):

```
Client Version: v1.32.4
Kustomize Version: v5.5.0
Server Version: v1.33.2-eks-931bdca
```
Environment:
- Cloud provider or hardware configuration: AWS EKS
- OS (e.g. from /etc/os-release): Bottlerocket OS 1.44.0
- Kernel (e.g. uname -a): Linux ip-<IP>.eu-west-1.compute.internal 6.12.37 #1 SMP Thu Jul 24 23:20:53 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
- Install tools: Helm chart
- Basic cluster related info: kubectl get nodes -o wide:

```
NAME                                 STATUS   ROLES    AGE   VERSION               INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                                KERNEL-VERSION   CONTAINER-RUNTIME
ip-<IP>.eu-west-1.compute.internal   Ready    <none>   28m   v1.33.1-eks-b9364f6   <INTERNAL-IP>   <none>        Bottlerocket OS 1.44.0 (aws-k8s-1.33)   6.12.37          containerd://2.0.5+bottlerocket
[...]
```
How to reproduce this issue:
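A minimal sketch, assuming a cluster with arm64 nodes and that defaultBackend.image.image is the chart value selecting the arm64 image (the release name is arbitrary):

```sh
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install chart 4.13.0 with the default backend enabled on arm64 nodes.
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --version 4.13.0 \
  --set defaultBackend.enabled=true \
  --set defaultBackend.image.image=defaultbackend-arm64

# The defaultbackend pod then crash-loops with the "permission denied" error above.
kubectl get pods | grep defaultbackend
```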
Anything else we need to know: