feat: add provisioner job hang detector #7927
I need to ask a (perhaps stupid) question, since we're relying on exit/hang timeouts to be predictable.

Do we have any way to ensure that this specific context is canceled in the event that heartbeats or updates are hanging, failing, or timing out? Say network conditions are such that the stream doesn't die and this stream context remains open, but provisioner heartbeats to coderd are not coming through (perhaps stream writes simply hang).

Or, say it takes 3 minutes longer for this context to be canceled than the hang detector is expecting. We would then be waiting 3 + 3 minutes, and thus could still be canceling (SIGINT) the terraform apply a minute after the job is marked as terminated.
I added a 30-second timeout to updates, and failed heartbeats will cause the stream context to be canceled, which should result in graceful cancellation starting immediately.
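A minimal sketch of that behavior, assuming hypothetical names (`sendFunc`, `sendWithTimeout`, `heartbeatLoop`) rather than the actual provisionerd code: every stream write is bounded by a 30-second deadline, and a failed or timed-out heartbeat cancels the shared stream context so graceful cancellation (e.g. SIGINT to the terraform apply) starts right away instead of waiting out the hang detector.

```go
// Hypothetical sketch, not the actual coder/provisionerd implementation;
// it only illustrates bounding each stream write and canceling the stream
// context when a heartbeat fails.
package sketch

import (
	"context"
	"time"
)

const (
	updateTimeout     = 30 * time.Second // per-update deadline
	heartbeatInterval = time.Second
)

// sendFunc stands in for a single write on the job stream.
type sendFunc func(ctx context.Context, msg string) error

// sendWithTimeout bounds one stream write so a hung write cannot block
// for longer than updateTimeout.
func sendWithTimeout(ctx context.Context, send sendFunc, msg string) error {
	sendCtx, cancel := context.WithTimeout(ctx, updateTimeout)
	defer cancel()
	return send(sendCtx, msg)
}

// heartbeatLoop cancels the stream context as soon as a heartbeat fails or
// times out, so graceful job cancellation can begin immediately.
func heartbeatLoop(streamCtx context.Context, cancelStream context.CancelFunc, send sendFunc) {
	ticker := time.NewTicker(heartbeatInterval)
	defer ticker.Stop()
	for {
		select {
		case <-streamCtx.Done():
			return
		case <-ticker.C:
			if err := sendWithTimeout(streamCtx, send, "heartbeat"); err != nil {
				cancelStream()
				return
			}
		}
	}
}
```

In this sketch, job updates would go through the same bounded send, and the part of the provisioner running the job would watch `streamCtx.Done()` to start graceful cancellation.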