Commit 4fc9ec7 (parent: 886dcbe)

Clarify template caching doc

Signed-off-by: Danny Kopping <danny@coder.com>
docs/tutorials/best-practices/speed-up-templates.md

Lines changed: 57 additions & 36 deletions
@@ -21,7 +21,7 @@ potentially optimize within the template.
 
 ![Screenshot of a workspace and its build timeline](../../images/best-practice/build-timeline.png)
 
-Adjust this request to match your Coder access URL and workspace:
+You can also retrieve this detail programmatically from the API:
 
 ```shell
 curl -X GET https://coder.example.com/api/v2/workspacebuilds/{workspacebuild}/timings \
@@ -36,9 +36,9 @@ for more information.
 ### Coder Observability Chart
 
 Use the [Observability Helm chart](https://github.com/coder/observability) for a
-pre-built set of dashboards to monitor your control plane over time. It includes
-Grafana, Prometheus, Loki, and Alert Manager out-of-the-box, and can be deployed
-on your existing Grafana instance.
+pre-built set of dashboards to monitor your Coder deployments over time. It
+includes pre-configured Grafana, Prometheus, Loki, and Alertmanager instances to
+ingest and display key observability data.
 
 We recommend that all administrators deploying on Kubernetes or on an existing
 Prometheus or Grafana stack set the observability bundle up with the control
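
Installing the observability chart referenced in this hunk is a standard Helm workflow. The sketch below is illustrative only: the Helm repository URL and release name are assumptions, so check the coder/observability repository README for the canonical steps.

```shell
# Illustrative sketch; the repository URL and release name are assumptions.
# See https://github.com/coder/observability for the authoritative steps.
helm repo add coder-observability https://helm.coder.com/observability
helm repo update

# Deploy Grafana, Prometheus, Loki, and Alertmanager into their own namespace.
helm upgrade --install observability coder-observability/observability \
  --namespace coder-observability \
  --create-namespace
```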
@@ -48,40 +48,40 @@ or our [Kubernetes installation guide](../../install/kubernetes.md).
 
 ### Enable Prometheus metrics for Coder
 
-[Prometheus.io](https://prometheus.io/docs/introduction/overview/#what-is-prometheus)
-is included as part of the [observability chart](#coder-observability-chart). It
-offers a variety of
-[available metrics](../../admin/integrations/prometheus.md#available-metrics),
+Coder exposes a variety of
+[application metrics](../../admin/integrations/prometheus.md#available-metrics),
 such as `coderd_provisionerd_job_timings_seconds` and
-`coderd_agentstats_startup_script_seconds`, which measure how long the workspace
-takes to provision and how long the startup script takes.
+`coderd_agentstats_startup_script_seconds`, which measure how long the
+workspaces take to provision and how long the startup scripts take.
 
-You can
-[install it separately](https://prometheus.io/docs/prometheus/latest/getting_started/)
-if you prefer.
+To make use of these metrics, you will need to
+[enable Prometheus metrics](../../admin/integrations/prometheus.md#enable-prometheus-metrics)
+exposition.
 
 ## Provisioners
 
-`coder server` defaults to three provisioner daemons. Each provisioner daemon
-can handle one single job, such as start, stop, or delete at a time and can be
-resource intensive. When all provisioners are busy, workspaces enter a "pending"
-state until a provisioner becomes available.
+`coder server` by default provides three built-in provisioner daemons
+(controlled by the
+[`CODER_PROVISIONER_DAEMONS`](../../reference/cli/server.md#--provisioner-daemons)
+config option). Each provisioner daemon can handle a single job (such as
+start, stop, or delete) at a time and can be resource intensive. When all
+provisioners are busy, workspaces enter a "pending" state until a provisioner
+becomes available.
 
 ### Increase provisioner daemons
 
 Provisioners are queue-based to reduce unpredictable load to the Coder server.
-However, they can be scaled up to allow more concurrent provisioners. You risk
-overloading the central Coder server if you use too many built-in provisioners,
-so we recommend a maximum of five provisioners. For more than five provisioners,
-we recommend that you move to
-[external provisioners](../../admin/provisioners.md).
-
-If you can’t move to external provisioners, use the `provisioner-daemons` flag
-to increase the number of provisioner daemons to five:
-
-```shell
-coder server --provisioner-daemons=5
-```
+If you require higher provisioner job throughput, you can achieve this by
+increasing the
+[`CODER_PROVISIONER_DAEMONS`](../../reference/cli/server.md#--provisioner-daemons)
+config option.
+
+You risk overloading Coder if you use too many built-in provisioners, so we
+recommend a maximum of five built-in provisioners per `coderd` replica. For more
+than five provisioners, we recommend that you move to
+[External Provisioners](../../admin/provisioners.md) and also consider
+[High Availability](../../admin/networking/high-availability.md) to run multiple
+`coderd` replicas.
 
 Visit the
 [CLI documentation](../../reference/cli/server.md#--provisioner-daemons) for
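
The provisioner scaling guidance in this hunk can be applied either with the `--provisioner-daemons` flag shown in the removed lines or with the `CODER_PROVISIONER_DAEMONS` environment variable named in the added lines, for example:

```shell
# Run up to five built-in provisioner daemons per coderd replica
# (the recommended ceiling before moving to external provisioners).
coder server --provisioner-daemons=5

# Equivalent environment variable form:
CODER_PROVISIONER_DAEMONS=5 coder server
```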
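
For the Prometheus metrics section above, a minimal sketch of enabling metrics exposition follows. The option names are believed to match the linked Prometheus integration page, and the bind address is only an example; confirm both against that page.

```shell
# Enable the Prometheus metrics endpoint; the bind address is an example.
CODER_PROMETHEUS_ENABLE=true \
CODER_PROMETHEUS_ADDRESS=0.0.0.0:2112 \
  coder server
```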
@@ -116,21 +116,28 @@ for more information.
 
 ## Set up Terraform provider caching
 
-By default, Coder downloads each Terraform provider when a workspace starts.
-This can create unnecessary network and disk I/O.
+### Template lock file
+
+On each workspace build, Terraform will examine the providers used by the
+template and attempt to download the latest version of each provider (unless
+constrained to a specific version). Terraform exposes a mechanism to build a
+static list of provider versions, which improves cacheability.
+
+Without caching, Terraform will need to download each provider on each build,
+and this can create unnecessary network and disk I/O.
 
 `terraform init` generates a `.terraform.lock.hcl` which instructs Coder
 provisioners to cache specific versions of your providers.
 
-To use `terraform init` to cache providers:
+To use `terraform init` to build the static provider version list:
 
-1. Pull the templates to your local device:
+1. Pull your template to your local device:
 
 ```shell
-coder templates pull
+coder templates pull <template>
 ```
 
-1. Run `terraform init` to initialize the directory:
+1. Run `terraform init` inside the template directory to build the lock file:
 
 ```shell
 terraform init
@@ -139,5 +146,19 @@ To use `terraform init` to cache providers:
 1. Push the templates back to your Coder deployment:
 
 ```shell
-coder templates push
+coder templates push <template>
 ```
+
+This will bundle up your template and the lock file and upload it to Coder. The
+next time the template is used, Terraform will attempt to cache the specific
+provider versions.
+
+### Cache directory
+
+Coder will instruct Terraform to cache its downloaded providers in the
+configured [`CODER_CACHE_DIRECTORY`](../../reference/cli/server.md#--cache-dir)
+directory.
+
+Ensure that this directory is set to a location on disk which will persist
+across restarts of Coder (or
+[External Provisioners](../../admin/provisioners.md), if you're using them).
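
To make the cache directory guidance concrete, a minimal sketch follows; the path is an example, not a requirement.

```shell
# Point the Terraform provider cache at storage that persists across restarts
# of coderd (and of any external provisioner daemons). The path is an example.
CODER_CACHE_DIRECTORY=/var/lib/coder/cache coder server
```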
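
One hedged suggestion related to the lock file workflow above: when `terraform init` runs on a different OS or architecture than your provisioners, the generated lock file may only record provider hashes for your local platform. Terraform's `providers lock` command can add hashes for additional platforms; the platform values below are examples.

```shell
# Record provider hashes for the platforms your provisioners run on, not just
# the machine generating the lock file. Platform values are examples.
terraform providers lock \
  -platform=linux_amd64 \
  -platform=darwin_arm64

# Push the updated lock file as in the steps above:
coder templates push <template>
```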
