Commit ab95ddd

Commit message: 1k 2k 3k

Parent: 17e5431

File tree: 4 files changed (+52, -14 lines)

- docs/admin/architectures/1k-users.md
- docs/admin/architectures/2k-users.md
- docs/admin/architectures/3k-users.md
- docs/admin/architectures/index.md

docs/admin/architectures/1k-users.md

Lines changed: 21 additions & 3 deletions
@@ -16,10 +16,28 @@ tech startups, educational units, or small to mid-sized enterprises.
 | ----------- | ------------------- | -------- | --------------- | ---------- | ----------------- |
 | Up to 1,000 | 2 vCPU, 8 GB memory | 2        | `n1-standard-2` | `t3.large` | `Standard_D2s_v3` |
 
-### Workspace nodes
+**Footnotes**:
 
-TODO
+- For small deployments (ca. 100 users, 10 concurrent workspace builds), it is
+  acceptable to deploy provisioners on `coderd` nodes.
 
 ### Provisioner nodes
 
-TODO
+| Users       | Node capacity        | Replicas                 | GCP              | AWS          | Azure             |
+| ----------- | -------------------- | ------------------------ | ---------------- | ------------ | ----------------- |
+| Up to 1,000 | 8 vCPU, 32 GB memory | 2 / 30 provisioners each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
+
+**Footnotes**:
+
+- An external provisioner is deployed as Kubernetes pod.
+
+### Workspace nodes
+
+| Users       | Node capacity        | Replicas                | GCP              | AWS          | Azure             |
+| ----------- | -------------------- | ----------------------- | ---------------- | ------------ | ----------------- |
+| Up to 1,000 | 8 vCPU, 32 GB memory | 64 / 16 workspaces each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
+
+**Footnotes**:
+
+- Assumed that a workspace user needs 2 GB memory to perform
+- Maximum number of Kubernetes workspace pods per node: 256
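The workspace-node sizing added above follows from the footnotes: roughly 2 GB of memory per workspace user on an 8 vCPU / 32 GB node, capped at 256 workspace pods per node. As a rough illustration of that arithmetic (a minimal sketch; the function name, its signature, and the rounding from 63 up to the table's 64 replicas are assumptions, not part of the committed docs):

```python
import math

def workspace_nodes(users: int,
                    node_memory_gb: int = 32,
                    memory_per_workspace_gb: int = 2,
                    max_pods_per_node: int = 256) -> tuple[int, int]:
    """Estimate workspace density and node count from the documented assumptions."""
    # Density is limited by memory first, then by the Kubernetes pods-per-node cap.
    workspaces_per_node = min(node_memory_gb // memory_per_workspace_gb,
                              max_pods_per_node)
    # Round the node count up; the table keeps a little extra headroom (64 vs. 63).
    replicas = math.ceil(users / workspaces_per_node)
    return workspaces_per_node, replicas

print(workspace_nodes(1_000))  # (16, 63); the table provisions 64 nodes with 16 workspaces each
```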

docs/admin/architectures/2k-users.md

Lines changed: 4 additions & 4 deletions
@@ -36,13 +36,13 @@ enabling it for deployment reliability.
 
 ### Workspace nodes
 
-| Users       | Node capacity        | Replicas | GCP              | AWS          | Azure             |
-| ----------- | -------------------- | -------- | ---------------- | ------------ | ----------------- |
-| Up to 2,000 | 8 vCPU, 32 GB memory | 128      | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
+| Users       | Node capacity        | Replicas                 | GCP              | AWS          | Azure             |
+| ----------- | -------------------- | ------------------------ | ---------------- | ------------ | ----------------- |
+| Up to 2,000 | 8 vCPU, 32 GB memory | 128 / 16 workspaces each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
 
 **Footnotes**:
 
 - Assumed that a workspace user needs 2 GB memory to perform
-- Maximum number of Kubernetes pods per node: 256
+- Maximum number of Kubernetes workspace pods per node: 256
 - Nodes can be distributed in 2 regions, not necessarily evenly split, depending
   on developer team sizes
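The sizing itself is unchanged here apart from the explicit density annotation (128 nodes at 16 workspaces each); the footnote leaves the two-region split to developer team sizes. A hedged sketch of how such a split might be computed (the 60/40 ratio and the helper name are purely illustrative assumptions):

```python
def split_nodes_by_region(total_nodes: int, share_region_a: float) -> tuple[int, int]:
    """Split workspace nodes across two regions in proportion to developer headcount."""
    region_a = round(total_nodes * share_region_a)
    return region_a, total_nodes - region_a

# 2,000-user tier: 128 workspace nodes, assuming 60 % of developers sit in region A.
print(split_nodes_by_region(128, 0.60))  # (77, 51)
```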

docs/admin/architectures/3k-users.md

Lines changed: 23 additions & 4 deletions
@@ -17,10 +17,29 @@ purposes.
 | ----------- | -------------------- | -------- | --------------- | ----------- | ----------------- |
 | Up to 3,000 | 8 vCPU, 32 GB memory | 4        | `n1-standard-4` | `t3.xlarge` | `Standard_D4s_v3` |
 
+### Provisioner nodes
+
+| Users       | Node capacity        | Replicas                 | GCP              | AWS          | Azure             |
+| ----------- | -------------------- | ------------------------ | ---------------- | ------------ | ----------------- |
+| Up to 3,000 | 8 vCPU, 32 GB memory | 8 / 30 provisioners each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
+
+**Footnotes**:
+
+- An external provisioner is deployed as Kubernetes pod.
+- It is strongly discouraged to run provisioner daemons on `coderd` nodes.
+- Separate provisioners into different namespaces in favor of zero-trust or
+  multi-cloud deployments.
+
 ### Workspace nodes
 
-TODO
+| Users       | Node capacity        | Replicas                 | GCP              | AWS          | Azure             |
+| ----------- | -------------------- | ------------------------ | ---------------- | ------------ | ----------------- |
+| Up to 3,000 | 8 vCPU, 32 GB memory | 256 / 12 workspaces each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
+
+**Footnotes**:
 
-Developers for up to 3000+ users architecture are also in an on-premises
-network. Document a provisioner running in a different cloud environment, and
-the zero-trust benefits of that.
+- Assumed that a workspace user needs 2 GB memory to perform
+- Maximum number of Kubernetes workspace pods per node: 256
+- As workspace nodes can be distributed between regions, on-premises networks
+  and cloud areas, consider different namespaces in favor of zero-trust or
+  multi-cloud deployments.
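The two tables added above can be sanity-checked against the 3,000-user target with simple arithmetic. A rough sketch of that check, assuming "30 provisioners each" translates to 30 concurrent workspace builds per provisioner node (an interpretation, not a statement from the committed docs):

```python
# Counts copied from the two tables above for the 3,000-user tier.
provisioner_nodes, provisioners_per_node = 8, 30
workspace_node_count, workspaces_per_node = 256, 12

concurrent_builds = provisioner_nodes * provisioners_per_node  # 240 concurrent workspace builds
workspace_slots = workspace_node_count * workspaces_per_node   # 3072 workspace slots

print(f"concurrent builds: {concurrent_builds}")
print(f"workspace slots:   {workspace_slots} (target: up to 3,000 users)")
```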

docs/admin/architectures/index.md

Lines changed: 4 additions & 3 deletions
@@ -270,12 +270,13 @@ for workspace users, administrators must be aware of a few assumptions.
 - Workspace pods run on the same Kubernetes cluster, but possible in a different
   namespace or a node pool.
 - Workspace limits (per workspace user):
-  - Developers can choose between 4-8 vCPUs, and 4-16 GB memory.
   - Evaluate the workspace utilization pattern. For instance, a regular web
     development does not require high CPU capacity all the time, but only during
     project builds or load tests.
-  - Minimum requirements for Coder agent running in an idle workspace are 0.1
-    vCPU and 256 MB.
+  - Evaluate minimal limits for single workspace. Include in the calculation
+    requirements for Coder agent running in an idle workspace - 0.1 vCPU and 256
+    MB. For instance, developers can choose between 0.5-8 vCPUs, and 1-16 GB
+    memory.
 
 #### Scaling formula
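The rewritten bullet folds the idle Coder agent overhead (0.1 vCPU, 256 MB) into the per-workspace minimums rather than listing it separately. Ahead of the scaling formula, a minimal sketch of how that overhead combines with a developer's chosen workspace size (the helper name and the sample 2 vCPU / 4 GB pick are illustrative assumptions):

```python
# Idle Coder agent overhead quoted in the bullet above.
AGENT_IDLE_VCPU = 0.1
AGENT_IDLE_MEMORY_MB = 256

def per_workspace_request(dev_vcpu: float, dev_memory_gb: float) -> tuple[float, float]:
    """Add the idle-agent overhead to the developer's chosen workspace size."""
    return dev_vcpu + AGENT_IDLE_VCPU, dev_memory_gb + AGENT_IDLE_MEMORY_MB / 1024

# A developer picking 2 vCPUs / 4 GB, within the documented 0.5-8 vCPU and 1-16 GB range.
print(per_workspace_request(2, 4))  # (2.1, 4.25)
```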
