fix(examples): use more precise example kubernetes template labels #14028
Conversation
All contributors have signed the CLA ✍️ ✅

I have read the CLA Document and I hereby sign the CLA

recheck
I agree with the overall intent, but Kubernetes PVCs have a length limit of 63 characters. The workspace_instance local will be 73 characters long, as it consists of 2 UUIDs joined by a hyphen.

How about we instead use data.coder_workspace.me.id to act as a stable identifier?
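A minimal sketch of the arithmetic behind this suggestion, assuming the template's workspace_instance local joins two 36-character UUIDs (which attributes actually compose it isn't shown in this thread, so the owner_id reference below is an assumption for illustration):

```hcl
locals {
  # Hypothetical reconstruction: two 36-character UUIDs joined by a
  # hyphen give 36 + 1 + 36 = 73 characters, exceeding the 63-character
  # limit on PVC names.
  workspace_instance = "${data.coder_workspace.me.id}-${data.coder_workspace.me.owner_id}"

  # Suggested alternative: a single UUID (36 characters) as the stable
  # identifier, well under the limit.
  workspace_id = data.coder_workspace.me.id
}
```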
Good catch, I think

I think you'll run into the same issue if you try to use that
https://kubernetes.io/docs/reference/kubectl/generated/kubectl_label/
…8s example template as per docs
Force-pushed from fb0805a to bf1ea39
Sorry, I didn't realise labels had the same restriction. I've changed everything to just use the workspace ID.
@geniass, it looks like the CLA is still not signed by you. This can happen if your GitHub email and your committer email are different.
recheck
Force-pushed from fb6fb0f to bf1ea39
👍 Smoke-tested on my personal deployment (v2.14.1)
The current example template doesn't have the selector labels needed to correctly identify the pods created by the workspace's Kubernetes deployment. That is, when multiple workspaces are created from a template, each deployment incorrectly tries to manage pods belonging to the other workspaces' deployments.

This fix also changes persistent resource names to use immutable IDs instead of names, as per the docs: https://coder.com/docs/templates/resource-persistence
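As a hedged sketch of the selector-label fix described above (not the actual PR diff; resource and label names here are illustrative), the deployment's selector and pod-template labels are keyed on the workspace ID so each deployment only ever matches its own pods:

```hcl
resource "kubernetes_deployment" "main" {
  metadata {
    name = "coder-workspace-${data.coder_workspace.me.id}"
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        # Unique per workspace: this deployment can never select pods
        # created by another workspace's deployment.
        "app.kubernetes.io/instance" = "coder-workspace-${data.coder_workspace.me.id}"
      }
    }
    template {
      metadata {
        labels = {
          # Must match the selector above exactly.
          "app.kubernetes.io/instance" = "coder-workspace-${data.coder_workspace.me.id}"
        }
      }
      # spec { ... container definition omitted ... }
    }
  }
}
```

Because data.coder_workspace.me.id is a single 36-character UUID, the full label value stays under Kubernetes' 63-character limit, and because it is immutable it also persists across workspace renames.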