docs: scaling Coder #5206
Conversation
Co-authored-by: Dean Sheather <dean@deansheather.com>
| Environment                                       | Users         | Last tested  | Status   |
| ------------------------------------------------- | ------------- | ------------ | -------- |
| [Google Kubernetes Engine (GKE)](./gke.md)         | 50, 100, 1000 | Nov 29, 2022 | Complete |
| [AWS Elastic Kubernetes Service (EKS)](./eks.md)   | 50, 100, 1000 | Nov 29, 2022 | Complete |
🐛 [✖] ./eks.md → Status: 400
Yep - I wanted to get a review on the GKE format before duplicating it. Ideally, we'd have a way of generating these, though.
Thanks for drafting the document, @bpmct! My recommendation is to work on providing more numbers confirming our success (`Completed`), but it's a good starting point!
@@ -0,0 +1,36 @@
We regularly scale-test Coder against various reference architectures. Additionally, we provide a [scale testing utility](#scaletest-utility) which can be used in your own environment to give insight into how Coder scales with your deployment's specific templates, images, etc.
We regularly scale-test Coder
As a customer, I'd like to reproduce Coder's results, but the doc doesn't mention the release version used. It might be good to record both the version and the date the load tests were performed.
For deployments with 100+ users, we recommend running the Coder server in a separate node pool via taints, tolerations, and node selectors.

### Cluster configuration
If we're using Terraform or GDM to spawn these machines, could we share the config and link it here? It would make it easier for customers to bring up their own clusters.
At this point, we're doing it manually (or with internal Terraform configs that are not ready for the public), but I agree we should eventually provide the Terraform config for many of these environments.
I'd hate for Terraform to be a prerequisite for us testing a specific environment (e.g. OpenShift, DigitalOcean), but agree that it's highly beneficial for reproducibility, and so that customers could quickly spin up clusters.
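To make the node-pool recommendation quoted above concrete, here is a minimal sketch of isolating the Coder server on its own pool. The pool/label names and the Helm values keys are assumptions for illustration, not the tested configuration:

```sh
# Taint the dedicated node pool so only Coder server pods schedule there
# (the label selector and taint names here are illustrative).
kubectl taint nodes \
  -l cloud.google.com/gke-nodepool=coder-server \
  dedicated=coder-server:NoSchedule

# Assuming the Helm chart exposes standard nodeSelector/tolerations values:
helm upgrade --install coder coder-v2/coder \
  --set coder.nodeSelector."cloud\.google\.com/gke-nodepool"=coder-server \
  --set coder.tolerations[0].key=dedicated \
  --set coder.tolerations[0].value=coder-server \
  --set coder.tolerations[0].operator=Equal \
  --set coder.tolerations[0].effect=NoSchedule
```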
- **Node pools**
  - Coder server
    - **Instance type**: `e2-highcpu-4`
    - **Operating system**: `Ubuntu with containerd`
The Ubuntu version is missing. We could also refer to the VM image.
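For reference, a node pool matching the quoted spec could be created along these lines, which also pins the image type; the cluster and pool names are illustrative assumptions:

```sh
# Create a GKE node pool with the documented machine type and the
# "Ubuntu with containerd" image (cluster/pool names are placeholders).
gcloud container node-pools create coder-server \
  --cluster coder-scaletest \
  --machine-type e2-highcpu-4 \
  --image-type UBUNTU_CONTAINERD \
  --num-nodes 2
```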
| Environment                                | Users         | Last tested  | Status   |
| ------------------------------------------ | ------------- | ------------ | -------- |
| [Google Kubernetes Engine (GKE)](./gke.md) | 50, 100, 1000 | Nov 29, 2022 | Complete |
Complete
It would be awesome if we could share more details here, picturing the autoscaling behavior and the duration of the tests.
BTW, this data could easily be converted into a set of blog posts about load testing.
Complete
Some reference data about API latencies might also be helpful for us, so that we know whether Coder's performance improved or degraded over time.
Yeah, test "Complete" is a very weak statement. Like, we could totally fail the test and still say, well, the test is complete.
This was inspired by GitLab's Performance and Stability page. I wasn't sure of the best way, in a table view, to show that we've validated a Coder deployment with `n` users, but agree that "Complete" isn't the best term.
Perhaps `Validated`?
The column could also be omitted and we could put a ✅ or ⌛ next to each user count. I'm not exactly sure of the best format for this information at the moment.
I presume that we could at least add a green-yellow-red legend:

- green: everything went smoothly 👍 SLAs (do we have any?) not impacted, platform performance not degraded
- yellow: users can operate, but in a few cases we observed an SLA being violated, for instance due to high latency. We should describe specifically what went wrong.
- red: total disaster; the platform misbehaves, is not usable, etc.

In general, it would be awesome if we could automatically raise a GitHub issue for every performance test run and discuss the results there. BTW, this is a good moment to "build" the SRE attitude in Coder :)
The green-yellow-red system is a bit overkill for what we need right now. As we build out our tests and automation we can start using it, but we're nowhere near there yet. We also don't have any SLAs or criteria to base a yellow on yet.
Sounds like an action item for me 👍
- CPU: `2 cores`
- RAM: `4 GB`

## 100 users
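If these figures describe the Coder server pod, one way to express them would be as standard Kubernetes requests/limits; a sketch, assuming the Helm chart exposes a conventional `resources` block (the values keys are a guess):

```sh
# Hypothetical: map the documented sizing onto requests/limits via Helm.
helm upgrade --install coder coder-v2/coder \
  --set coder.resources.requests.cpu=2 \
  --set coder.resources.requests.memory=4Gi \
  --set coder.resources.limits.cpu=2 \
  --set coder.resources.limits.memory=4Gi
```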
Index.md claims that we tested at 50, 100, and 1000 users, but this doc only covers 50 and 100.
Ah yeah, we should disregard the numbers and instance types at the moment; these are all placeholder-ish, to align on the format and info we want to display.

@mtojek - do you mean test cases beyond the number of workspaces, or benchmarks (e.g. time to complete)? Some examples would be helpful, even if you don't consider them a blocker to merging this first PR.
- return results (e.g. `99 succeeded, 1 failed to connect`)

```sh
coder scaletest create-workspaces \
```
Is this what we want the command to be like, or what it currently is?
Just tried to look this up, and I couldn't find `scaletest`. What I did find was `loadtest`, but that takes a config file, not flags.
This PR is blocked by #5202. I'll be sure to update the schema to whatever it changes to prior to merging
I'm always in favor of quick iterations and having deliverables as soon as possible, so not a blocker at all! As an enterprise DevOps persona, I would like to read about the following aspects of load testing before considering the platform:

IMHO we need to prove that we know what we're doing and that we control the platform. It would be vague if we just posted the word "Completed" without any interpretation of the results. We don't need to interpret them every time we run tests, but we should document the scenarios at least once. As I said, we don't need to work on those items at the moment, but it would be great to prepare a roadmap for long-term scale tests.
The test does the following:

- create `n` workspaces
- establish SSH connection to each workspace
- run `sleep 3 && echo hello` on each workspace via the web terminal
- close connections, attempt to delete all workspaces
- return results (e.g. `99 succeeded, 1 failed to connect`)
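For intuition, here is a rough manual equivalent of the loop described above, using standard `coder` CLI commands. The template name is a placeholder, and passing a command through `coder ssh` is an assumption about the CLI, not the utility's actual implementation:

```sh
# Illustrative only - the scaletest utility automates this across n workspaces.
for i in $(seq 1 "$NUM_WORKSPACES"); do
  coder create "scaletest-$i" --template kubernetes --yes    # create workspace
  coder ssh "scaletest-$i" -- sh -c 'sleep 3 && echo hello'  # run test command
  coder delete "scaletest-$i" --yes                          # clean up
done
```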
I should document what test is run inside each environment/architecture, similar to GitLab's.
We can also do this via graphs (e.g. "workspaces created", etc.).
We can also add SQL sizing recommendations.
This Pull Request is becoming stale. In order to minimize WIP, prevent merge conflicts, and keep the tracker readable, I'm going to close this PR in 3 days if there isn't more activity.