@@ -21,26 +21,14 @@ enabling it for deployment reliability.
| ----------- | -------------------- | -------- | --------------- | ----------- | ----------------- |
| Up to 2,000 | 4 vCPU, 16 GB memory | 2 | `n1-standard-4` | `t3.xlarge` | `Standard_D4s_v3` |
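+
+ For reference, a minimal Helm values sketch for this sizing is shown below. It
+ assumes the `coder/coder` chart's standard `coder.replicaCount` and
+ `coder.resources` fields (check your chart version), and the request numbers
+ are illustrative, leaving headroom for system pods on a 4 vCPU / 16 GB node.
+
+ ```yaml
+ # Sketch only: coderd sized for up to 2,000 users on 4 vCPU / 16 GB nodes.
+ coder:
+   replicaCount: 2      # two coderd replicas, as in the table above
+   resources:
+     requests:
+       cpu: "3"         # assumption: leave ~1 vCPU for kubelet/system pods
+       memory: 12Gi     # assumption: leave ~4 GB for kubelet/system pods
+ ```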
- ### Workspace nodes
-
- | Users | Node capacity | Replicas | GCP | AWS | Azure |
- | ----------- | -------------------- | -------- | ---------------- | ------------ | ----------------- |
- | Up to 2,000 | 8 vCPU, 32 GB memory | 2 | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
-
- TODO
-
- Max pods per node 256
-
- Developers for up to 2000+ users architecture are in 2 regions (a different
- cluster) and are evenly split. In practice, this doesn’t change much besides the
- diagram and workspaces node pool autoscaling config as it still uses the central
- provisioner. Recommend multiple provisioner groups for zero-trust and
- multi-cloud use cases.
-
### Provisioner nodes

TODO

+ In practice, distributing developers across multiple regions doesn’t change much
+ besides the diagram and the workspace node pool autoscaling config, since builds
+ still go through the central provisioner. We recommend multiple provisioner
+ groups for zero-trust and multi-cloud use cases.
+
For example, to support 120 concurrent workspace builds:

- Create a cluster/nodepool with 4 nodes, 8-core each (AWS: `t3.2xlarge` GCP:
@@ -49,3 +37,19 @@ For example, to support 120 concurrent workspace builds:
(`CODER_PROVISIONER_DAEMONS=30`)
- Ensure Coder's [PostgreSQL server](./configure.md#postgresql-database) can use
up to 2 cores and 4 GB RAM
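+
+ For instance, with the `coder/coder` Helm chart this could be expressed as the
+ values snippet below. It only shows the provisioner setting and assumes the
+ chart's standard `coder.env` list; node pool and replica sizing come from the
+ bullets above.
+
+ ```yaml
+ # Sketch only: 4 coderd replicas x 30 built-in provisioner daemons each
+ # = 120 concurrent workspace builds.
+ coder:
+   env:
+     - name: CODER_PROVISIONER_DAEMONS
+       value: "30"
+ ```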
+
+ ### Workspace nodes
+
+ | Users | Node capacity | Replicas | GCP | AWS | Azure |
+ | ----------- | -------------------- | -------- | ---------------- | ------------ | ----------------- |
+ | Up to 2,000 | 8 vCPU, 32 GB memory | 128 | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
+
+ **Assumptions**:
+
+ - Each workspace user needs 2 GB of memory to perform their work
+   (2,000 users x 2 GB roughly matches 128 nodes x 32 GB)
+
+ **Footnotes**:
+
+ - Maximum number of Kubernetes pods per node: 256 (see the kubelet sketch below)
+ - Nodes can be distributed across 2 regions, not necessarily evenly split,
+   depending on developer team sizes
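+
+ The pod-per-node footnote translates to the kubelet setting sketched below,
+ assuming you manage the kubelet configuration yourself; managed offerings
+ (GKE, EKS, AKS) set this limit per node pool instead, and the exact mechanism
+ varies by provider.
+
+ ```yaml
+ # Sketch only: raise the kubelet pod limit to match the footnote above.
+ apiVersion: kubelet.config.k8s.io/v1beta1
+ kind: KubeletConfiguration
+ maxPods: 256
+ ```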