feat: expose prometheus port in helm chart #10448

Merged
merged 6 commits on Nov 6, 2023
Changes from 1 commit
Add helm test and fix template bug
deansheather committed Nov 6, 2023
commit bdcc8dac72b20a4130aec330a4a366935ac7a42c
14 changes: 8 additions & 6 deletions docs/admin/prometheus.md
@@ -35,7 +35,8 @@ The Prometheus endpoint can be enabled in the
[Helm chart's](https://github.com/coder/coder/tree/main/helm) `values.yml` by
setting the environment variable `CODER_PROMETHEUS_ADDRESS` to `0.0.0.0:2112`.
The environment variable `CODER_PROMETHEUS_ENABLE` will be enabled
automatically. A Service Endpoint will also be exposed allowing Prometheus Service Monitors to be used.
automatically. A Service Endpoint will also be exposed allowing Prometheus
Service Monitors to be used.
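
For reference, a minimal `values.yml` sketch matching this paragraph (the keys follow the chart's `coder.env` list; treat it as an illustration rather than the canonical configuration):

```yaml
coder:
  env:
    # Enables the Prometheus endpoint on the Coder pods.
    - name: CODER_PROMETHEUS_ADDRESS
      value: "0.0.0.0:2112"
    # The service template matches on this variable to expose the
    # "prometheus-http" port on the Service.
    - name: CODER_PROMETHEUS_ENABLE
      value: "true"
```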

### Prometheus configuration

@@ -53,8 +54,9 @@ scrape_configs:
apps: "coder"
```
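
The hunk above only shows the tail of the scrape configuration example. For orientation, a hedged sketch of a complete static `scrape_configs` entry (the target address is a placeholder; 2112 matches the `CODER_PROMETHEUS_ADDRESS` value used in this doc):

```yaml
scrape_configs:
  - job_name: "coder"
    static_configs:
      # Replace with the address of your Coder deployment's metrics endpoint.
      - targets: ["coder.example.com:2112"]
        labels:
          apps: "coder"
```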

To use the Service Endpoint for prometheus to scrape the metrics, you can create a
service monitor. Below is an example: `coder-service-monitor`:
To use the Kubernetes Prometheus operator to scrape metrics, you will need to
create a `ServiceMonitor` in your Coder deployment namespace. Below is an
example `ServiceMonitor`:

```yaml
apiVersion: monitoring.coreos.com/v1
@@ -64,9 +66,9 @@ metadata:
namespace: coder
spec:
endpoints:
- port: prometheus-http
interval: 10s
scrapeTimeout: 10s
- port: prometheus-http
interval: 10s
scrapeTimeout: 10s
selector:
matchLabels:
app.kubernetes.io/name: coder
10 changes: 8 additions & 2 deletions helm/coder/templates/service.yaml
@@ -30,13 +30,19 @@ spec:
{{- end }}
{{- range .Values.coder.env }}
{{- if eq .name "CODER_PROMETHEUS_ENABLE" }}
{{/*
This sadly has to be nested to avoid evaluating the second part
of the condition too early and potentially getting type errors if
the value is not a string (like a `valueFrom`). We do not support
`valueFrom` for this env var specifically.
*/}}
{{- if eq .value "true" }}
- name: "prometheus-http"
port: 2112
targetPort: "prometheus-http"
protocol: TCP
{{ if eq .Values.coder.service.type "NodePort" }}
nodePort: {{ .Values.coder.service.prometheusNodePort }}
{{ if eq $.Values.coder.service.type "NodePort" }}
nodePort: {{ $.Values.coder.service.prometheusNodePort }}
{{ end }}
{{- end }}
{{- end }}
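
For context, a hedged sketch of the kind of `coder.env` entry the comment above is guarding against: an entry that uses `valueFrom` has no `.value`, so evaluating `eq .value "true"` against it would compare a nil value to a string, which Go templates reject.

```yaml
coder:
  env:
    # Hypothetical entry for illustration only: there is no .value here, so a
    # non-nested condition that evaluated `eq .value "true"` for every entry
    # would fail template rendering on it.
    - name: SOME_SECRET
      valueFrom:
        secretKeyRef:
          name: coder-secrets
          key: some-secret
```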
9 changes: 8 additions & 1 deletion helm/coder/tests/chart_test.go
@@ -80,6 +80,10 @@ var testCases = []testCase{
name: "extra_templates",
expectedError: "",
},
{
name: "prometheus",
expectedError: "",
},
}

type testCase struct {
@@ -158,7 +162,10 @@ func TestUpdateGoldenFiles(t *testing.T) {

valuesPath := tc.valuesFilePath()
templateOutput, err := runHelmTemplate(t, helmPath, "..", valuesPath)

if err != nil {
t.Logf("error running `helm template -f %q`: %v", valuesPath, err)
t.Logf("output: %s", templateOutput)
}
require.NoError(t, err, "failed to run `helm template -f %q`", valuesPath)

goldenFilePath := tc.goldenFilePath()
204 changes: 204 additions & 0 deletions helm/coder/tests/testdata/prometheus.golden
@@ -0,0 +1,204 @@
---
# Source: coder/templates/coder.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
annotations: {}
labels:
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: coder
app.kubernetes.io/part-of: coder
app.kubernetes.io/version: 0.1.0
helm.sh/chart: coder-0.1.0
name: coder
---
# Source: coder/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: coder-workspace-perms
rules:
- apiGroups: [""]
resources: ["pods"]
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- deployments
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
---
# Source: coder/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "coder"
subjects:
- kind: ServiceAccount
name: "coder"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: coder-workspace-perms
---
# Source: coder/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: coder
labels:
helm.sh/chart: coder-0.1.0
app.kubernetes.io/name: coder
app.kubernetes.io/instance: release-name
app.kubernetes.io/part-of: coder
app.kubernetes.io/version: "0.1.0"
app.kubernetes.io/managed-by: Helm
annotations:
{}
spec:
type: NodePort
sessionAffinity: None
ports:
- name: "http"
port: 80
targetPort: "http"
protocol: TCP

nodePort:


- name: "prometheus-http"
port: 2112
targetPort: "prometheus-http"
protocol: TCP

nodePort: 31112

selector:
app.kubernetes.io/name: coder
app.kubernetes.io/instance: release-name
---
# Source: coder/templates/coder.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: coder
app.kubernetes.io/part-of: coder
app.kubernetes.io/version: 0.1.0
helm.sh/chart: coder-0.1.0
name: coder
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: release-name
app.kubernetes.io/name: coder
template:
metadata:
annotations: {}
labels:
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: coder
app.kubernetes.io/part-of: coder
app.kubernetes.io/version: 0.1.0
helm.sh/chart: coder-0.1.0
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/instance
operator: In
values:
- coder
topologyKey: kubernetes.io/hostname
weight: 1
containers:
- args:
- server
command:
- /opt/coder
env:
- name: CODER_HTTP_ADDRESS
value: 0.0.0.0:8080
- name: CODER_PROMETHEUS_ADDRESS
value: 0.0.0.0:2112
- name: CODER_ACCESS_URL
value: http://coder.default.svc.cluster.local
- name: KUBE_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CODER_DERP_SERVER_RELAY_URL
value: http://$(KUBE_POD_IP):8080
- name: CODER_PROMETHEUS_ENABLE
value: "true"
image: ghcr.io/coder/coder:latest
imagePullPolicy: IfNotPresent
lifecycle: {}
livenessProbe:
httpGet:
path: /healthz
port: http
scheme: HTTP
name: coder
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 2112
name: prometheus-http
protocol: TCP
readinessProbe:
httpGet:
path: /healthz
port: http
scheme: HTTP
resources: {}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: null
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
volumeMounts: []
restartPolicy: Always
serviceAccountName: coder
terminationGracePeriodSeconds: 60
volumes: []
9 changes: 9 additions & 0 deletions helm/coder/tests/testdata/prometheus.yaml
@@ -0,0 +1,9 @@
coder:
  image:
    tag: latest
  service:
    type: NodePort
    prometheusNodePort: 31112
  env:
    - name: CODER_PROMETHEUS_ENABLE
      value: "true"
17 changes: 10 additions & 7 deletions helm/coder/values.yaml
@@ -259,15 +259,18 @@ coder:
# coder.service.annotations -- The service annotations. See:
# https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
annotations: {}
# coder.service.httpNodePort -- Enabled if coder.service.type is set to NodePort.
# If not set, Kubernetes will allocate a port from the default range, 30000-32767.
# coder.service.httpNodePort -- Enabled if coder.service.type is set to
# NodePort. If not set, Kubernetes will allocate a port from the default
# range, 30000-32767.
httpNodePort: ""
# coder.service.httpsNodePort -- Enabled if coder.service.type is set to NodePort.
# If not set, Kubernetes will allocate a port from the default range, 30000-32767.
# coder.service.httpsNodePort -- Enabled if coder.service.type is set to
# NodePort. If not set, Kubernetes will allocate a port from the default
# range, 30000-32767.
httpsNodePort: ""
# Prometheus service port is only exposed if CODER_PROMETHEUS_ENABLE=true in the env section.
# coder.service.prometheusNodePort -- Enabled if coder.service.type is set to NodePort.
# If not set, Kubernetes will allocate a port from the default range, 30000-32767.
# coder.service.prometheusNodePort -- Enabled if coder.service.type is set
# to NodePort. If not set, Kubernetes will allocate a port from the default
# range, 30000-32767. The "prometheus-http" port on the coder service is
# only exposed if CODER_PROMETHEUS_ENABLE is set to true.
prometheusNodePort: ""

# coder.ingress -- The Ingress object to expose for Coder.
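
As an illustration of how these settings combine (this mirrors the `prometheus.yaml` test values added above; the node port number is arbitrary within the 30000-32767 range):

```yaml
coder:
  service:
    type: NodePort
    # Fixed NodePort for the "prometheus-http" service port.
    prometheusNodePort: 31112
  env:
    # Required for the chart to expose the "prometheus-http" port at all.
    - name: CODER_PROMETHEUS_ENABLE
      value: "true"
```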