@@ -97,10 +97,47 @@ bottlenecks.
### Infrastructure and setup requirements
- TODO
+ In a single workflow, the scale tests runner maintains a consistent load
+ distribution as follows:
+
+ - 80% of users open and utilize SSH connections.
+ - 25% of users connect to the workspace using the Web Terminal.
+ - 40% of users simulate traffic for workspace apps.
+ - 20% of users interact with the Coder UI via a headless browser.
+
+ This distribution closely mirrors natural user behavior, as observed among our
+ customers.
+
+ The basic setup of the scale tests environment involves:
+
+ 1. Scale tests runner: `c2d-standard-32` (32 vCPU, 128 GB RAM)
+ 2. Coderd: 2 replicas (4 vCPU, 16 GB RAM)
+ 3. Database: 1 replica (2 vCPU, 32 GB RAM)
+ 4. Provisioner: 50 instances (0.5 vCPU, 512 MB RAM)
+
+ No pod restarts or internal errors were observed.
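As a rough illustration, the distribution above can be turned into concurrent-user counts for a given population. The sketch below assumes 2000 simulated users (the figure used in the traffic projections); since the shares sum to more than 100%, a simulated user presumably performs more than one activity, so the buckets overlap.

```python
# Sketch: expected concurrent users per activity for the stated distribution.
# The 2000-user total is an assumption taken from the traffic projections;
# buckets overlap, so the counts sum to more than the total.
TOTAL_USERS = 2000

distribution = {
    "ssh": 0.80,             # open and utilize SSH connections
    "web_terminal": 0.25,    # connect via the Web Terminal
    "workspace_apps": 0.40,  # simulate workspace app traffic
    "dashboard": 0.20,       # drive the Coder UI via a headless browser
}

counts = {name: int(TOTAL_USERS * share) for name, share in distribution.items()}
for name, n in counts.items():
    print(f"{name}: {n} users")
# ssh: 1600, web_terminal: 500, workspace_apps: 800, dashboard: 400
```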
### Traffic Projections
- <!-- during scale tests -->
+ In our scale tests, we simulate activity from 2000 users, 2000 workspaces, and
+ 2000 agents, with metadata being sent twice every 10 s. Here are the resulting
+ metrics:
+
+ Coder:
+
+ - coderd: Median CPU usage of 3 vCPU, peaking at 3.7 vCPU during dashboard
+   tests.
+ - provisionerd: Median CPU usage of 0.35 vCPU during workspace provisioning.
+
+ API:
+
+ - Median request rate: 350 req/s during dashboard tests, 250 req/s during Web
+   Terminal and workspace apps tests.
+ - 2000 agent connections with latency: p90 at 60 ms, p95 at 220 ms.
+ - On average, 2400 WebSocket connections during dashboard tests.
+
+ Database:
+
- TODO
+ - Median CPU utilization: 80%.
+ - Median memory utilization: 40%.
+ - Average `write_ops_count` per minute: between 400 and 500 operations.
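The metadata cadence above implies a steady aggregate message rate on the control plane. A back-of-envelope sketch, assuming each of the 2000 agents sends 2 metadata reports per 10-second interval:

```python
# Back-of-envelope: aggregate agent metadata rate under the test parameters.
# Assumes each agent independently sends 2 reports per 10 s interval.
AGENTS = 2000
REPORTS_PER_INTERVAL = 2
INTERVAL_SECONDS = 10

messages_per_second = AGENTS * REPORTS_PER_INTERVAL / INTERVAL_SECONDS
print(messages_per_second)  # 400.0
```

That is roughly 400 metadata messages per second arriving at the control plane, which is the background load on top of which the API request rates above were measured.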