@@ -21,7 +21,7 @@ potentially optimize within the template.
![Screenshot of a workspace and its build timeline](../../images/best-practice/build-timeline.png)
- Adjust this request to match your Coder access URL and workspace:
+ You can also retrieve this detail programmatically from the API:

```shell
curl -X GET https://coder.example.com/api/v2/workspacebuilds/{workspacebuild}/timings \
@@ -36,9 +36,9 @@ for more information.
### Coder Observability Chart

Use the [Observability Helm chart](https://github.com/coder/observability) for a
- pre-built set of dashboards to monitor your control plane over time. It includes
- Grafana, Prometheus, Loki, and Alert Manager out-of-the-box, and can be deployed
- on your existing Grafana instance.
+ pre-built set of dashboards to monitor your Coder deployments over time. It
+ includes pre-configured instances of Grafana, Prometheus, Loki, and Alertmanager
+ to ingest and display key observability data.
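As a rough sketch of deploying the chart, installation might look like the following. The Helm repository URL, release name, and namespace here are assumptions; follow the chart repository's README for the canonical steps:

```shell
# Assumed repository URL and chart name -- verify against
# github.com/coder/observability before running.
helm repo add coder-observability https://helm.coder.com/observability
helm repo update
helm install coder-observability coder-observability/coder-observability \
  --namespace coder-observability --create-namespace
```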

We recommend that all administrators deploying on Kubernetes or on an existing
Prometheus or Grafana stack set the observability bundle up with the control
@@ -48,40 +48,44 @@ or our [Kubernetes installation guide](../../install/kubernetes.md).

### Enable Prometheus metrics for Coder

- [Prometheus.io](https://prometheus.io/docs/introduction/overview/#what-is-prometheus)
- is included as part of the [observability chart](#coder-observability-chart). It
- offers a variety of
- [available metrics](../../admin/integrations/prometheus.md#available-metrics),
+ Coder exposes a variety of
+ [application metrics](../../admin/integrations/prometheus.md#available-metrics),
such as `coderd_provisionerd_job_timings_seconds` and
- `coderd_agentstats_startup_script_seconds`, which measure how long the workspace
- takes to provision and how long the startup script takes.
+ `coderd_agentstats_startup_script_seconds`, which measure how long
+ workspaces take to provision and how long startup scripts take.

- You can
- [install it separately](https://prometheus.io/docs/prometheus/latest/getting_started/)
- if you prefer.
+ To make use of these metrics, you will need to
+ [enable Prometheus metrics](../../admin/integrations/prometheus.md#enable-prometheus-metrics)
+ exposition.
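As a sketch, exposition can be switched on when starting the server. The flag names mirror the `CODER_PROMETHEUS_*` environment variables, and the listen address shown is assumed to be the default; check the server CLI reference for your version:

```shell
# Serve Prometheus metrics on /metrics at the configured address.
coder server --prometheus-enable --prometheus-address 127.0.0.1:2112
```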
+
+ If you are not using the [Observability Chart](#coder-observability-chart), you
+ will need to install Prometheus and configure it to scrape the metrics from your
+ Coder installation.
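A minimal scrape configuration might look like this sketch. The job name, host, and port are assumptions; point the target at wherever your deployment exposes its metrics endpoint:

```shell
# Write a minimal Prometheus config that scrapes Coder's metrics endpoint.
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: "coder"
    static_configs:
      - targets: ["coder.example.com:2112"]
EOF
```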

## Provisioners

- `coder server` defaults to three provisioner daemons. Each provisioner daemon
- can handle one single job, such as start, stop, or delete at a time and can be
- resource intensive. When all provisioners are busy, workspaces enter a "pending"
- state until a provisioner becomes available.
+ `coder server` by default provides three built-in provisioner daemons
+ (controlled by the
+ [`CODER_PROVISIONER_DAEMONS`](../../reference/cli/server.md#--provisioner-daemons)
+ config option). Each provisioner daemon can handle a single job (such as
+ start, stop, or delete) at a time and can be resource intensive. When all
+ provisioners are busy, workspaces enter a "pending" state until a provisioner
+ becomes available.

### Increase provisioner daemons

Provisioners are queue-based to reduce unpredictable load to the Coder server.
- However, they can be scaled up to allow more concurrent provisioners. You risk
- overloading the central Coder server if you use too many built-in provisioners,
- so we recommend a maximum of five provisioners. For more than five provisioners,
- we recommend that you move to
- [external provisioners](../../admin/provisioners.md).
-
- If you can’t move to external provisioners, use the `provisioner-daemons` flag
- to increase the number of provisioner daemons to five:
-
- ```shell
- coder server --provisioner-daemons=5
- ```
+ If you require higher provisioner job throughput, increase the
+ [`CODER_PROVISIONER_DAEMONS`](../../reference/cli/server.md#--provisioner-daemons)
+ config option.
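For example, to run five built-in provisioner daemons (the environment variable and the `--provisioner-daemons` flag set the same option):

```shell
# Start the server with five built-in provisioner daemons.
CODER_PROVISIONER_DAEMONS=5 coder server
# or, equivalently:
coder server --provisioner-daemons=5
```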
+
+ You risk overloading Coder if you use too many built-in provisioners, so we
+ recommend a maximum of five built-in provisioners per `coderd` replica. For more
+ than five provisioners, we recommend that you move to
+ [External Provisioners](../../admin/provisioners.md) and also consider
+ [High Availability](../../admin/networking/high-availability.md) to run multiple
+ `coderd` replicas.

Visit the
[CLI documentation](../../reference/cli/server.md#--provisioner-daemons) for
@@ -116,21 +120,28 @@ for more information.

## Set up Terraform provider caching

- By default, Coder downloads each Terraform provider when a workspace starts.
- This can create unnecessary network and disk I/O.
+ ### Template lock file
+
+ On each workspace build, Terraform will examine the providers used by the
+ template and attempt to download the latest version of each provider unless it
+ is constrained to a specific version. Terraform exposes a mechanism to build a
+ static list of provider versions, which improves cacheability.
+
+ Without caching, Terraform will download each provider on each build, which
+ can create unnecessary network and disk I/O.

`terraform init` generates a `.terraform.lock.hcl` which instructs Coder
provisioners to cache specific versions of your providers.

- To use `terraform init` to cache providers:
+ To use `terraform init` to build the static provider version list:

- 1. Pull the templates to your local device:
+ 1. Pull your template to your local device:

```shell
- coder templates pull
+ coder templates pull <template>
```

- 1. Run `terraform init` to initialize the directory:
+ 1. Run `terraform init` inside the template directory to build the lock file:

```shell
terraform init
@@ -139,5 +150,19 @@ To use `terraform init` to cache providers:
1. Push the templates back to your Coder deployment:

```shell
- coder templates push
+ coder templates push <template>
```
+
+ This bundles up your template and the lock file and uploads it to Coder. The
+ next time the template is used, Terraform will attempt to cache the specific
+ provider versions.
+
+ ### Cache directory
+
+ Coder will instruct Terraform to cache its downloaded providers in the
+ configured [`CODER_CACHE_DIRECTORY`](../../reference/cli/server.md#--cache-dir)
+ directory.
+
+ Ensure that this directory is set to a location on disk which will persist
+ across restarts of Coder or
+ [external provisioners](../../admin/provisioners.md), if you're using them.