@@ -250,7 +250,7 @@ When determining scaling requirements, consider the following factors:
- `1 vCPU x 1 GB memory x 2 concurrent workspace build`: A formula to determine
resource allocation based on the number of concurrent workspace builds, and
- standard complexity of a Terraform template. _The rule of thumb_: the more
+ standard complexity of a Terraform template. _Rule of thumb_: the more
provisioners are free/available, the more concurrent workspace builds can be
performed.
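
To make the arithmetic concrete, the formula above can be applied as in the following sketch. This is not part of the Coder docs or codebase; the constants and function names are illustrative assumptions.

```python
import math

# "1 vCPU x 1 GB memory x 2 concurrent workspace build" rule of thumb.
VCPU_PER_UNIT = 1    # vCPUs per formula unit
MEM_GB_PER_UNIT = 1  # GB of memory per formula unit
BUILDS_PER_UNIT = 2  # concurrent workspace builds handled per unit

def provisioner_capacity(peak_concurrent_builds: int) -> tuple[int, int]:
    """Estimate (vCPUs, memory in GB) for a given peak build concurrency."""
    units = math.ceil(peak_concurrent_builds / BUILDS_PER_UNIT)
    return units * VCPU_PER_UNIT, units * MEM_GB_PER_UNIT

# Example: 20 workspace builds kicked off at the start of the workday.
vcpus, mem_gb = provisioner_capacity(20)
print(f"~{vcpus} vCPU and ~{mem_gb} GB memory across the provisioner fleet")
# -> ~10 vCPU and ~10 GB memory across the provisioner fleet
```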
@@ -280,13 +280,25 @@ for workspace users, administrators must be aware of a few assumptions.
#### Scaling formula
- TODO
+ When determining scaling requirements, consider the following factors:
+
+ - `1 vCPU x 2 GB memory x 1 workspace`: A formula to determine resource
+ allocation based on the minimal requirements for an idle workspace with a
+ running Coder agent and occasional CPU and memory bursts for building
+ projects.
+
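
As a rough illustration of the workspace baseline above, the sketch below estimates how many nodes of a given size a workspace pool needs. The node sizes and the burst headroom factor are assumptions, not guidance from this document.

```python
import math

# "1 vCPU x 2 GB memory x 1 workspace" baseline for an idle workspace.
VCPU_PER_WORKSPACE = 1
MEM_GB_PER_WORKSPACE = 2
HEADROOM = 1.2  # assumed ~20% slack for CPU/memory bursts while building projects

def nodes_needed(workspaces: int, node_vcpu: int, node_mem_gb: int) -> int:
    """Estimate how many workspace nodes of a given size are required."""
    by_cpu = workspaces * VCPU_PER_WORKSPACE * HEADROOM / node_vcpu
    by_mem = workspaces * MEM_GB_PER_WORKSPACE * HEADROOM / node_mem_gb
    return math.ceil(max(by_cpu, by_mem))

# Example: 100 workspaces on 8 vCPU / 32 GB nodes.
print(nodes_needed(100, node_vcpu=8, node_mem_gb=32))  # -> 15 nodes (CPU-bound)
```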
+ **Node Autoscaling**
+
+ Workspace nodes can be set to operate in autoscaling mode to mitigate the risk
+ of prolonged high resource utilization.
- - Guidance for reasonable ratio of CPU limits/requests
- - Guidance for reasonable ratio for memory requests/limits
+ One approach is to scale up workspace nodes when total CPU usage or memory
+ consumption reaches 80%. Another option is to scale based on metrics such as the
+ number of workspaces or active users. Note that as new users
+ onboard, the autoscaling configuration should account for ongoing workspaces.
- Mention that as users onboard, the autoscaling config should take care of
- ongoing workspaces
+ Scaling down workspace nodes to zero is not recommended, as it will result in
+ longer wait times for workspace provisioning by users.
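
The scale-up rule described above can be sketched as a small decision function. The names and thresholds below are illustrative; in practice this logic would live in a cluster autoscaler or similar tooling rather than custom code.

```python
SCALE_UP_THRESHOLD = 0.80  # scale up when CPU or memory utilization reaches 80%
MIN_NODES = 1              # keep at least one node so provisioning stays fast

def desired_node_count(current_nodes: int, cpu_util: float, mem_util: float) -> int:
    """Target size of the workspace node pool (scale-up and floor only)."""
    if max(cpu_util, mem_util) >= SCALE_UP_THRESHOLD:
        return current_nodes + 1
    return max(current_nodes, MIN_NODES)

# Example: 4 nodes running at 85% memory utilization -> grow the pool to 5.
print(desired_node_count(4, cpu_util=0.55, mem_util=0.85))  # -> 5
```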
### Database