@@ -9,14 +9,21 @@ support successful customer deployments.
Let's dive into the core concepts and terminology essential for understanding
Coder's architecture and deployment strategies.

- ## Glossary
+ ## General concepts

### Administrator

An administrator is a user role within the Coder platform with elevated
privileges. Admins have access to administrative functions such as user
management, template definitions, insights, and deployment configuration.

+ ### Coder
+
+ Coder, also known as _coderd_, is the main service recommended for deployment
+ with Kubernetes replicas to ensure high availability. It provides an API for
+ managing workspaces and templates. Each _coderd_ replica has the capability to
+ host multiple provisioners (provisionerd).
+
### User

A user is an individual who utilizes the Coder platform to develop, test, and
@@ -95,10 +102,28 @@ users without slowing down. This process encompasses infrastructure setup,
traffic projections, and aggressive testing to identify and mitigate potential
bottlenecks.

+ In our scale tests, we adopt a staged approach to thoroughly evaluate the
+ system's performance. These stages include:
+
+ 1. Prepare environment: create the expected users and provision workspaces.
+
+ 2. Dashboard evaluation: verify the responsiveness and stability of Coder
+    dashboards under varying load conditions. This is achieved by simulating
+    user interactions using instances of headless Chromium browsers.
+
+ 3. SSH connections: establish user connections with agents, verifying their
+    ability to echo back received content.
+
+ 4. Workspace application traffic: assess the handling of user connections
+    with specific workspace apps, confirming their capability to echo back
+    received content effectively.
+
+ 5. Cleanup: clean up used workspace resources.
+
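Stages 3 and 4 above hinge on the same correctness check: bytes sent to a workspace agent or app must come back unchanged. A minimal, self-contained Python sketch of such an echo check follows; it is an illustration of the idea only, not Coder's actual test harness (the local TCP server stands in for a workspace agent):

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    # Stand-in for a workspace agent or app: echo back whatever arrives.
    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(data)

# Bind to an ephemeral local port so the sketch is self-contained.
server = socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

payload = b"scaletest-probe"
with socket.create_connection(server.server_address) as conn:
    conn.sendall(payload)
    echoed = conn.recv(1024)

server.shutdown()
print("ok" if echoed == payload else "mismatch")
```

The real tests run this kind of probe over SSH sessions and workspace app connections at scale; the pass condition is the same byte-for-byte comparison.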

### Infrastructure and setup requirements

- In a single workflow, the scale tests runner maintains a consistent load
- distribution as follows:
+ In a single workflow, the scale tests runner evenly spreads out the workload
+ like this:

- 80% of users open and utilize SSH connections.
- 25% of users connect to the workspace using the Web Terminal.
@@ -111,7 +136,7 @@ customers.
The basic setup of the scale tests environment involves:

1. Scale tests runner: `c2d-standard-32` (32 vCPU, 128 GB RAM)
- 2. Coderd: 2 replicas (4 vCPU, 16 GB RAM)
+ 2. Coder: 2 replicas (4 vCPU, 16 GB RAM)
3. Database: 1 replica (2 vCPU, 32 GB RAM)
4. Provisioner: 50 instances (0.5 vCPU, 512 MB RAM)
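Summing the list above gives the total compute footprint of the test environment. A quick back-of-the-envelope check in Python (all figures are copied from the list; 512 MB is taken as 0.5 GB):

```python
# (vCPU per instance, RAM in GB per instance, instance count) per component
components = {
    "runner":      (32,  128, 1),
    "coder":       (4,   16,  2),
    "database":    (2,   32,  1),
    "provisioner": (0.5, 0.5, 50),  # 512 MB = 0.5 GB
}

total_vcpu = sum(cpu * n for cpu, _, n in components.values())
total_ram = sum(ram * n for _, ram, n in components.values())
print(total_vcpu, "vCPU,", total_ram, "GB RAM")  # 67 vCPU, 217 GB RAM
```

Note that the 50 small provisioner instances together account for more vCPU (25) than the two Coder replicas combined (8).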
@@ -123,9 +148,9 @@ In our scale tests, we simulate activity from 2000 users, 2000 workspaces, and
2000 agents, with metadata being sent 2x every 10s. Here are the resulting
metrics:

- Coderd:
+ Coder:

- - Median CPU usage for coderd: 3 vCPU, peaking at 3.7 vCPU during dashboard
+ - Median CPU usage for _coderd_: 3 vCPU, peaking at 3.7 vCPU during dashboard
  tests.
- Median API request rate: 350 req/s during dashboard tests, 250 req/s during
  Web Terminal and workspace apps tests.
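For context on how much of this load comes from metadata reporting alone, the figures above (2000 agents, metadata sent 2x every 10s) work out to:

```python
agents = 2000
reports_per_interval = 2  # metadata sent 2x ...
interval_seconds = 10     # ... every 10 s

rate = agents * reports_per_interval / interval_seconds
print(f"{rate:.0f} metadata messages/s across the fleet")  # 400
```

So metadata alone sustains a steady 400 messages/s against the deployment, on top of the API traffic measured above.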
@@ -140,4 +165,4 @@ Database:
- Median CPU utilization: 80%.
- Median memory utilization: 40%.
- - Average write_ops_count per minute: between 400 and 500 operations.
+ - `write_ops_count` per minute: between 400 and 500 operations.