
Commit 233866f ("2k")

1 parent 4721204

2 files changed: +30 -19 lines changed

docs/admin/architectures/2k-users.md

Lines changed: 9 additions & 16 deletions
@@ -23,33 +23,26 @@ enabling it for deployment reliability.

 ### Provisioner nodes

-TODO
+| Users       | Node capacity        | Replicas                 | GCP              | AWS          | Azure             |
+| ----------- | -------------------- | ------------------------ | ---------------- | ------------ | ----------------- |
+| Up to 2,000 | 8 vCPU, 32 GB memory | 4 / 30 provisioners each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |

-In practice, this doesn’t change much besides the diagram and workspaces node
-pool autoscaling config as it still uses the central provisioner. Recommend
-multiple provisioner groups for zero-trust and multi-cloud use cases.
-
-For example, to support 120 concurrent workspace builds:
+**Footnotes**:

-- Create a cluster/nodepool with 4 nodes, 8-core each (AWS: `t3.2xlarge` GCP:
-  `e2-highcpu-8`)
-- Run coderd with 4 replicas, 30 provisioner daemons each.
-  (`CODER_PROVISIONER_DAEMONS=30`)
-- Ensure Coder's [PostgreSQL server](./configure.md#postgresql-database) can use
-  up to 2 cores and 4 GB RAM
+- An external provisioner is deployed as a Kubernetes pod.
+- It is not recommended to run provisioner daemons on `coderd` nodes.
+- Consider separating provisioners into different namespaces to support
+  zero-trust or multi-cloud deployments.

 ### Workspace nodes

 | Users       | Node capacity        | Replicas | GCP              | AWS          | Azure             |
 | ----------- | -------------------- | -------- | ---------------- | ------------ | ----------------- |
 | Up to 2,000 | 8 vCPU, 32 GB memory | 128      | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |

-**Assumptions**:
-
-- Workspace user needs 2 GB memory to perform
-
 **Footnotes**:

+- Assumed that a workspace user needs 2 GB of memory to perform their work
 - Maximum number of Kubernetes pods per node: 256
 - Nodes can be distributed in 2 regions, not necessarily evenly split, depending
   on developer team sizes
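
As a rough cross-check of the provisioner sizing added above, the hypothetical Python sketch below multiplies replicas by provisioner daemons per replica. The 4 x 30 figures come from the new table and the removed example (which set `CODER_PROVISIONER_DAEMONS=30`); per index.md in this same commit, each provisioner runs a single concurrent workspace build.

```python
# Hypothetical capacity sketch for the provisioner pool described above.
# Figures are taken from the "4 / 30 provisioners each" row; each daemon is
# assumed to handle exactly one concurrent workspace build.
replicas = 4               # provisioner replicas
daemons_per_replica = 30   # e.g. CODER_PROVISIONER_DAEMONS=30 on each replica

concurrent_builds = replicas * daemons_per_replica
print(f"Concurrent workspace builds supported: {concurrent_builds}")  # -> 120
```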

docs/admin/architectures/index.md

Lines changed: 21 additions & 3 deletions
@@ -236,15 +236,31 @@ for more details.

 ### Control plane: provisionerd

-TODO
+Each provisioner can run a single concurrent workspace build. For example,
+running 10 provisioner containers will allow 10 users to start workspaces at the
+same time.
+
+By default, the Coder server runs built-in provisioner daemons, but the
+_Enterprise_ Coder release allows for running external provisioners to separate
+the load caused by workspace provisioning from the `coderd` nodes.

 #### Scaling formula

-TODO
+When determining scaling requirements, consider the following factors:
+
+- `0.5 vCPU x 512 MB memory x concurrent workspace build`: A formula to
+  determine resource allocation based on the number of concurrent workspace
+  builds and the standard complexity of a Terraform template. _The rule of
+  thumb_: the more provisioners are free/available, the more concurrent
+  workspace builds can be performed.

 **Node Autoscaling**

-TODO
+Autoscaling provisioners is not an easy problem to solve unless it can be
+predicted when the number of concurrent workspace builds will increase.
+
+We recommend disabling autoscaling and adjusting the number of provisioners to
+developer needs based on the workspace build queuing time.

 ### Data plane: Workspaces

@@ -279,6 +295,8 @@ ongoing workspaces

 ### Database

+TODO
+
 PostgreSQL database

 measure and document the impact of dbcrypt
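
To make the `0.5 vCPU x 512 MB memory x concurrent workspace build` rule of thumb added above concrete, here is a small illustrative calculation. It is a sketch only; the target of 120 concurrent builds is an assumption carried over from the removed 2k-users example.

```python
# Illustrative application of the scaling rule of thumb above:
# roughly 0.5 vCPU and 512 MB of memory per concurrent workspace build.
concurrent_builds = 120  # assumed target (e.g. 4 replicas x 30 daemons each)

vcpu_total = 0.5 * concurrent_builds              # total vCPU across provisioners
memory_gb_total = 512 * concurrent_builds / 1024  # total memory in GB

print(f"~{vcpu_total:g} vCPU and ~{memory_gb_total:g} GB memory "
      f"for {concurrent_builds} concurrent workspace builds")  # ~60 vCPU, ~60 GB
```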
