
Commit 4721204

committed
WIP
1 parent 1a4dfb9 commit 4721204

File tree

2 files changed: +34 -18 lines changed

docs/admin/architectures/2k-users.md

Lines changed: 20 additions & 16 deletions
@@ -21,26 +21,14 @@ enabling it for deployment reliability.
 | ----------- | -------------------- | -------- | --------------- | ----------- | ----------------- |
 | Up to 2,000 | 4 vCPU, 16 GB memory | 2 | `n1-standard-4` | `t3.xlarge` | `Standard_D4s_v3` |

-### Workspace nodes
-
-| Users | Node capacity | Replicas | GCP | AWS | Azure |
-| ----------- | -------------------- | -------- | ---------------- | ------------ | ----------------- |
-| Up to 2,000 | 8 vCPU, 32 GB memory | 2 | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
-
-TODO
-
-Max pods per node: 256
-
-Developers in the up to 2,000 users architecture are in 2 regions (a different
-cluster) and are evenly split. In practice, this doesn't change much besides the
-diagram and the workspaces node pool autoscaling config, as it still uses the
-central provisioner. We recommend multiple provisioner groups for zero-trust and
-multi-cloud use cases.
-
 ### Provisioner nodes

 TODO

+In practice, this doesn't change much besides the diagram and the workspaces node
+pool autoscaling config, as it still uses the central provisioner. We recommend
+multiple provisioner groups for zero-trust and multi-cloud use cases.
+
 For example, to support 120 concurrent workspace builds:

 - Create a cluster/nodepool with 4 nodes, 8-core each (AWS: `t3.2xlarge` GCP:
@@ -49,3 +37,19 @@ For example, to support 120 concurrent workspace builds:
   (`CODER_PROVISIONER_DAEMONS=30`)
 - Ensure Coder's [PostgreSQL server](./configure.md#postgresql-database) can use
   up to 2 cores and 4 GB RAM
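
As a quick sanity check on the provisioner example above: the supported build concurrency is simply the node count times the provisioner daemons per node. A minimal sketch of that arithmetic, assuming the 4-node / 30-daemon figures from the list; the helper name is illustrative, not a Coder API:

```python
import math

def nodes_needed(concurrent_builds: int, daemons_per_node: int) -> int:
    """Smallest node count whose combined provisioner daemons cover the target."""
    return math.ceil(concurrent_builds / daemons_per_node)

# Figures from the example: 30 daemons per node (CODER_PROVISIONER_DAEMONS=30),
# 120 concurrent workspace builds as the target.
daemons_per_node = 30
target_builds = 120

nodes = nodes_needed(target_builds, daemons_per_node)
print(nodes)                      # 4 nodes (e.g. 8-core t3.2xlarge machines)
print(nodes * daemons_per_node)   # 120 concurrent builds of capacity
```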
+
+### Workspace nodes
+
+| Users | Node capacity | Replicas | GCP | AWS | Azure |
+| ----------- | -------------------- | -------- | ---------------- | ------------ | ----------------- |
+| Up to 2,000 | 8 vCPU, 32 GB memory | 128 | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
+
+**Assumptions**:
+
+- Each workspace user needs 2 GB of memory to work comfortably
+
+**Footnotes**:
+
+- Maximum number of Kubernetes pods per node: 256
+- Nodes can be distributed across 2 regions, not necessarily evenly split,
+  depending on developer team sizes
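
A minimal sketch of how the workspace-node replica count above follows from the stated assumptions (2 GB of memory per user, 32 GB nodes, at most 256 pods per node), assuming one workspace pod per active user; the raw arithmetic gives 125 nodes, so the table's 128 appears to include a little headroom (an assumption, not stated in the diff):

```python
import math

def workspace_nodes(users: int, mem_per_user_gb: float,
                    node_mem_gb: float, max_pods_per_node: int) -> int:
    """Node count driven by whichever constraint binds first: memory or pod limit."""
    by_memory = math.ceil(users * mem_per_user_gb / node_mem_gb)
    by_pods = math.ceil(users / max_pods_per_node)  # one workspace pod per user
    return max(by_memory, by_pods)

# Assumptions from the section above: 2,000 users, 2 GB per user,
# 8 vCPU / 32 GB nodes, 256 pods per node.
print(workspace_nodes(2000, 2, 32, 256))  # 125; the table's 128 adds headroom
```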

docs/admin/architectures/index.md

Lines changed: 14 additions & 2 deletions
@@ -182,7 +182,7 @@ Database:

 ## Hardware recommendation

-### Control plane
+### Control plane: coderd

 To ensure stability and reliability of the Coder control plane, it's essential
 to focus on node sizing, resource limits, and the number of replicas. We
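
For illustration only, a small sketch of how node sizing, per-replica resource limits, and replica count interact for the 2,000-user row earlier in this commit (4 vCPU / 16 GB nodes, 2 `coderd` replicas); the per-replica limit of 3 vCPU / 12 GB and the 20% node reserve are assumptions, not values from the docs:

```python
def users_per_replica(total_users: int, replicas: int) -> float:
    """How many users each coderd replica serves if load is spread evenly."""
    return total_users / replicas

def fits_on_node(replica_cpu: float, replica_mem_gb: float,
                 node_cpu: float, node_mem_gb: float,
                 reserve_fraction: float = 0.2) -> bool:
    """Whether one replica's resource limit fits a node while keeping some
    headroom for system pods and the kubelet (the 20% reserve is an assumption)."""
    return (replica_cpu <= node_cpu * (1 - reserve_fraction)
            and replica_mem_gb <= node_mem_gb * (1 - reserve_fraction))

# 2,000-user row from docs/admin/architectures/2k-users.md in this commit:
print(users_per_replica(2000, 2))   # 1000.0 users per coderd replica
print(fits_on_node(3, 12, 4, 16))   # True for an assumed 3 vCPU / 12 GB limit
```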
@@ -234,7 +234,19 @@ We recommend disabling the autoscaling for `coderd` nodes. Autoscaling can cause
 interruptions for user connections, see [Autoscaling](../scale.md#autoscaling)
 for more details.

-### Workspaces
+### Control plane: provisionerd
+
+TODO
+
+#### Scaling formula
+
+TODO
+
+**Node Autoscaling**
+
+TODO
+
+### Data plane: Workspaces

 To determine workspace resource limits and keep the best developer experience
 for workspace users, administrators must be aware of a few assumptions.
