fix: make GetWorkspacesEligibleForTransition return less false-positives #15429
Conversation
I'll take another look through, as I haven't looked at autostop behavior in a long time and need to wrap my head around it a bit more. I do have concerns that there are cases where we won't pick up jobs we should, or that there are ways to abuse the system.
Let's say I'm a nefarious user and start a bitcoin miner in a tmux session in my workspace. Then I hit stop + cancel immediately afterwards. Now my workspace is in a cancelled state, the previous job was a stop, and I don't think it'll be picked up by any condition here, so my bitcoin miner remains running in perpetuity. There may be other similar cases, and I'm not sure what should happen in these, but I could also be wrong.
PS. Very nice job on reducing the DB load through better filters 👍🏻
```sql
-- If the workspace's template has an inactivity_ttl set
-- it may be eligible for dormancy.
-- A workspace may be eligible for dormant stop if the following are true:
-- * The workspace is not dormant.
```
Who/when marks a workspace as dormant? Is there a chance the workspace can be marked dormant and we never trigger either autostop or dormant stop?
Autobuild executor does it here:
coder/coderd/autobuild/lifecycle_executor.go
Lines 497 to 506 in 7b33ab0
```go
// isEligibleForDormantStop returns true if the workspace should be dormant
// for breaching the inactivity threshold of the template.
func isEligibleForDormantStop(ws database.Workspace, templateSchedule schedule.TemplateScheduleOptions, currentTick time.Time) bool {
	// Only attempt against workspaces not already dormant.
	return !ws.DormantAt.Valid &&
		// The template must specify a time_til_dormant value.
		templateSchedule.TimeTilDormant > 0 &&
		// The workspace must breach the time_til_dormant value.
		currentTick.Sub(ws.LastUsedAt) > templateSchedule.TimeTilDormant
}
```
Ok, that plus this seems relevant:
coder/coderd/autobuild/lifecycle_executor.go
Lines 421 to 428 in 7b33ab0
```go
	case isEligibleForDormantStop(ws, templateSchedule, currentTick):
		// Only stop started workspaces.
		if latestBuild.Transition == database.WorkspaceTransitionStart {
			return database.WorkspaceTransitionStop, database.BuildReasonDormancy, nil
		}
		// We shouldn't transition the workspace but we should still
		// make it dormant.
		return "", database.BuildReasonDormancy, nil
```
As does this:
coder/coderd/autobuild/lifecycle_executor.go
Lines 254 to 264 in 7b33ab0
```go
	// Transition the workspace to dormant if it has breached the template's
	// threshold for inactivity.
	if reason == database.BuildReasonDormancy {
		wsOld := ws
		wsNew, err := tx.UpdateWorkspaceDormantDeletingAt(e.ctx, database.UpdateWorkspaceDormantDeletingAtParams{
			ID: ws.ID,
			DormantAt: sql.NullTime{
				Time:  dbtime.Now(),
				Valid: true,
			},
		})
```
So one thing that can happen, at least in the code version, is that a workspace has remained in a failed stop state for a long time. It then becomes eligible for dormant stop, but since the latest build wasn't a start, we don't try to stop it; we mark it as dormant anyway.
We'll then have a workspace marked dormant that we never attempted to stop.
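To make that gap concrete, here is a minimal, self-contained sketch of the case analysis from the quoted `getNextTransition` excerpt. The types and function names here are illustrative stand-ins, not the real coder schema; only the branching mirrors the quoted code.

```go
package main

import "fmt"

// transition is a stand-in for database.WorkspaceTransition.
type transition string

const (
	transitionStart transition = "start"
	transitionStop  transition = "stop"
)

// nextTransition mirrors the dormancy case in the quoted snippet: a
// workspace that breached the dormancy threshold is only stopped if its
// latest build was a start; otherwise no transition is attempted, but the
// dormancy reason is still returned (so the workspace gets marked dormant).
func nextTransition(latestBuild transition) (transition, string) {
	if latestBuild == transitionStart {
		return transitionStop, "dormancy"
	}
	// No stop is attempted for e.g. a failed or cancelled stop build.
	return "", "dormancy"
}

func main() {
	// A workspace stuck in a failed stop: marked dormant, never stopped.
	tr, reason := nextTransition(transitionStop)
	fmt.Printf("transition=%q reason=%q\n", tr, reason)
}
```

Running this shows the failed-stop path yields an empty transition with a dormancy reason, which is exactly the "dormant but never stopped" state described above.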
```sql
templates.time_til_dormant > 0 AND
workspaces.dormant_at IS NULL
(@now :: timestamptz) - workspaces.last_used_at > (INTERVAL '1 millisecond' * (templates.time_til_dormant / 1000000))
```
I realize this is probably already in use elsewhere but, not going to lie, it's weird seeing nanos being converted into millis. I know ms is the smallest unit in pg, but this isn't super intuitive for humans trying to understand what's going on here.
Obviously storing ns in the db is the main problem (and we're not fixing that here). But I'd prefer to see the conversion into seconds instead of ms, as that feels like a more intuitive unit.
Even the db column for `time_til_dormant` doesn't have a comment, so this is pretty much hidden magic 😄.
Not a blocker, but we should probably do something about this at some point.
Every day I regret my previous decision 😓
```sql
-- If the user account is suspended, and the workspace is running.
-- A workspace may be eligible for failed stop if the following are true:
-- * The template has a failure ttl set.
-- * The workspace build was a start transition.
```
If we failed to autostop previously, then this limitation will prevent this case from triggering and I think we'll never retry it because of the other cases. Is that as intended?
This should be the same logic that's done later on? We're not changing any logic here, just front-loading it.
coder/coderd/autobuild/lifecycle_executor.go
Lines 525 to 536 in 7b33ab0
```go
// isEligibleForFailedStop returns true if the workspace is eligible to be stopped
// due to a failed build.
func isEligibleForFailedStop(build database.WorkspaceBuild, job database.ProvisionerJob, templateSchedule schedule.TemplateScheduleOptions, currentTick time.Time) bool {
	// If the template has specified a failure TTL.
	return templateSchedule.FailureTTL > 0 &&
		// And the job resulted in failure.
		job.JobStatus == database.ProvisionerJobStatusFailed &&
		build.Transition == database.WorkspaceTransitionStart &&
		// And sufficient time has elapsed since the job has completed.
		job.CompletedAt.Valid &&
		currentTick.Sub(job.CompletedAt.Time) > templateSchedule.FailureTTL
}
```
I didn't look at the original logic, but then I suppose I'm proposing that the original logic doesn't take this into account?
Here, unless the failed build is "start", we'll never try to stop it. The failed build could just as well be a cancelled (or failed) "stop" which has left resources behind.
That's a good catch. We might want to adjust this behaviour in a separate PR.
EDIT: filed #15477
@johnstcn and I spotted, whilst doing some testing yesterday, that we're still returning false-positives in the following scenario:

```sql
(
    users.status = 'active'::user_status AND
    provisioner_jobs.job_status != 'failed'::provisioner_job_status AND
    workspace_builds.transition = 'stop'::workspace_transition AND
    workspaces.autostart_schedule IS NOT NULL
)
```

What is happening here is that any stopped workspace with an autostart schedule set is returned, even if its next scheduled start is still well in the future.
@mafredri Are you happy to merge the changes to this query as-is for now and investigate later improvements in a follow-up PR?
@johnstcn I worry about either copy going stale now that we have the logic duplicated both in code and in the query. Is there a reason to keep both?
Just thinking out loud, but right now the conditions are lost since they're part of the `WHERE`. If we moved them to the `SELECT` instead, they could be utilized as booleans in the code.
But aside from the duplication, and considering we're tracking additional state handling in #15477, I have no further objections. Feel free to merge.
…ositives (#15594)

Relates to #15082. Further to #15429, this reduces the amount of false-positives returned by the 'is eligible for autostart' part of the query. We achieve this by calculating the 'next start at' time of the workspace, storing it in the database, and using it in our `GetWorkspacesEligibleForTransition` query.

The prior implementation of the 'is eligible for autostart' query would return _all_ workspaces that at some point in the future _might_ be eligible for autostart. This now ensures we only return workspaces that _should_ be eligible for autostart.

We also now pass `currentTick` instead of `t` to the `GetWorkspacesEligibleForTransition` query, as otherwise we'd have one round of workspaces that are skipped by `isEligibleForTransition` due to `currentTick` being a truncated version of `t`.
Relates to #15082
The old implementation of `GetWorkspacesEligibleForTransition` returns many workspaces that are not actually eligible for transition. This new implementation reduces this number significantly (at least on our dogfood instance).

For each workspace returned from `GetWorkspacesEligibleForTransition`, we make (at minimum) 7 calls to the database. That means for our dogfood instance, we currently make roughly 700 DB calls a minute for workspaces that are not eligible for transition. The new implementation cuts this down to fewer than 50 DB calls a minute (a reduction of around 90%).

Looking at the planning/execution time of the new query, it appears to be either within run-to-run variance or a little quicker.
Before:
https://explain.dalibo.com/plan/95f6e257d8fg51gd
After:
https://explain.dalibo.com/plan/h2cb9hc533cac0c7