feat: reinitialize agents when a prebuilt workspace is claimed #17475

Open

wants to merge 28 commits into main

Commits (28)
c09c9b9  WIP: agent reinitialization (SasSwart, Apr 21, 2025)
476fe71  fix assignment to nil map (SasSwart, Apr 21, 2025)
8c8bca6  fix: ensure prebuilt workspace agent tokens are reused when a prebuil… (SasSwart, Apr 23, 2025)
7ce4eea  test agent reinitialization (SasSwart, Apr 24, 2025)
52ac64e  remove defunct metric (SasSwart, Apr 24, 2025)
362db7c  Remove todo (SasSwart, Apr 25, 2025)
dcc7379  test that we trigger workspace agent reinitialization under the right… (SasSwart, Apr 28, 2025)
ff66b3f  slight improvements to a test (SasSwart, Apr 28, 2025)
efff5d9  review notes to improve legibility (SasSwart, Apr 28, 2025)
cebd5db  add an integration test for prebuilt workspace agent reinitialization (SasSwart, Apr 29, 2025)
2679138  Merge remote-tracking branch 'origin/main' into jjs/prebuilds-agent-r… (SasSwart, Apr 29, 2025)
9feebef  enable the premium license in a prebuilds integration test (SasSwart, Apr 29, 2025)
b117b5c  encapsulate WaitForReinitLoop for easier testing (SasSwart, Apr 30, 2025)
a22b414  introduce unit testable abstraction layers (SasSwart, Apr 30, 2025)
9bbd2c7  test workspace claim pubsub (SasSwart, May 1, 2025)
5804201  add tests for agent reinitialization (SasSwart, May 1, 2025)
7e8dcee  review notes (SasSwart, May 1, 2025)
725f97b  Merge remote-tracking branch 'origin/main' into jjs/prebuilds-agent-r… (SasSwart, May 1, 2025)
a9b1567  make fmt lint (SasSwart, May 1, 2025)
21ee970  remove go mod replace (SasSwart, May 1, 2025)
e54d7e7  remove defunct logging (SasSwart, May 1, 2025)
2799858  update dependency on terraform-provider-coder (SasSwart, May 2, 2025)
1d93003  update dependency on terraform-provider-coder (SasSwart, May 2, 2025)
763fc12  go mod tidy (SasSwart, May 2, 2025)
0f879c7  make -B gen (SasSwart, May 2, 2025)
61784c9  dont require ids to InsertPresetParameters (SasSwart, May 2, 2025)
604eb27  dont require ids to InsertPresetParameters (SasSwart, May 2, 2025)
bf4d2cf  fix: set the running agent token (dannykopping, May 2, 2025)
Changes from 1 commit: encapsulate WaitForReinitLoop for easier testing
SasSwart committed Apr 30, 2025
commit b117b5c34d9eb9ee26fca644c24173fc94da7036
25 changes: 1 addition & 24 deletions cli/agent.go
@@ -19,8 +19,6 @@ import (
 	"golang.org/x/xerrors"
 	"gopkg.in/natefinch/lumberjack.v2"
 
-	"github.com/coder/retry"
-
 	"github.com/prometheus/client_golang/prometheus"
 
 	"cdr.dev/slog"
@@ -332,27 +330,7 @@ func (r *RootCmd) workspaceAgent() *serpent.Command {
 				containerLister = agentcontainers.NewDocker(execer)
 			}
 
-			// TODO: timeout ok?
-			reinitCtx, reinitCancel := context.WithTimeout(context.Background(), time.Hour*24)
-			defer reinitCancel()
-			reinitEvents := make(chan agentsdk.ReinitializationEvent)
-
-			go func() {
-				// Retry to wait for reinit, main context cancels the retrier.
-				for retrier := retry.New(100*time.Millisecond, 10*time.Second); retrier.Wait(ctx); {
-					select {
-					case <-reinitCtx.Done():
-						return
-					default:
-					}
-
-					err := client.WaitForReinit(reinitCtx, reinitEvents)
-					if err != nil {
-						logger.Error(ctx, "failed to wait for reinit instructions, will retry", slog.Error(err))
-					}
-				}
-			}()
-
+			reinitEvents := agentsdk.WaitForReinitLoop(ctx, logger, client)
 			var (
 				lastErr  error
 				mustExit bool
@@ -409,7 +387,6 @@ func (r *RootCmd) workspaceAgent() *serpent.Command {
 			prometheusSrvClose()
 
 			if mustExit {
-				reinitCancel()
 				break
 			}
 
22 changes: 11 additions & 11 deletions coderd/apidoc/docs.go

Some generated files are not rendered by default.

14 changes: 7 additions & 7 deletions coderd/apidoc/swagger.json

Some generated files are not rendered by default.

54 changes: 45 additions & 9 deletions codersdk/agentsdk/agentsdk.go
@@ -19,6 +19,7 @@ import (
 	"tailscale.com/tailcfg"
 
 	"cdr.dev/slog"
+	"github.com/coder/retry"
 	"github.com/coder/websocket"
 
 	"github.com/coder/coder/v2/agent/proto"
@@ -707,49 +708,84 @@ func PrebuildClaimedChannel(id uuid.UUID) string {
 // - ping: ignored, keepalive
 // - prebuild claimed: a prebuilt workspace is claimed, so the agent must reinitialize.
 // NOTE: the caller is responsible for closing the events chan.
-func (c *Client) WaitForReinit(ctx context.Context, events chan<- ReinitializationEvent) error {
+func (c *Client) WaitForReinit(ctx context.Context) (*ReinitializationEvent, error) {
 	// TODO: allow configuring httpclient
 	c.SDK.HTTPClient.Timeout = time.Hour * 24
Member commented:

Does this essentially place an upper limit of 24 hours on the maximum lifetime of a prebuild?

Contributor replied:

This is being worked on currently; disregard for the moment.

Contributor commented:

Why is a 24 hour timeout desirable? A prebuilt workspace could remain up without a reinit for much longer than this, no?

Also, this is a quiet side effect to the client: leaving it in a different state than it started in.

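The TODO below sketches why this is hard to fix in place: http.Client.Timeout and a context deadline both bound a request, and the shorter of the two always wins, so a long wait cannot be expressed on a shared client that may carry a short Timeout. A minimal sketch of the "separate client" idea, assuming placeholder baseURL and token wiring (this is not code from the PR):

	// Sketch only: a dedicated client for the long poll, leaving the SDK's
	// shared HTTP client untouched.
	longPoll := &http.Client{Timeout: 0} // 0 disables the client-side timeout
	waitCtx, cancel := context.WithTimeout(ctx, 24*time.Hour)
	defer cancel()
	req, err := http.NewRequestWithContext(waitCtx, http.MethodGet,
		baseURL+"/api/v2/workspaceagents/me/reinit", nil)
	if err != nil {
		return nil, xerrors.Errorf("create request: %w", err)
	}
	req.Header.Set("Coder-Session-Token", token) // placeholder auth wiring
	res, err := longPoll.Do(req) // canceled by waitCtx, not by a shared Timeout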

+	// TODO (sasswart): tried the following to fix the above, it won't work. The shorter timeout wins.
+	// I also considered cloning c.SDK.HTTPClient and setting the timeout on the cloned client.
+	// That won't work because we can't pass the cloned HTTPClient into s.SDK.Request.
+	// Looks like we're going to need a separate client to be able to have a longer timeout.
+	//
+	// timeoutCtx, cancelTimeoutCtx := context.WithTimeout(ctx, 24*time.Hour)
+	// defer cancelTimeoutCtx()
 
 	res, err := c.SDK.Request(ctx, http.MethodGet, "/api/v2/workspaceagents/me/reinit", nil)
 	if err != nil {
-		return xerrors.Errorf("execute request: %w", err)
+		return nil, xerrors.Errorf("execute request: %w", err)
 	}
 	defer res.Body.Close()
 
 	if res.StatusCode != http.StatusOK {
-		return codersdk.ReadBodyAsError(res)
+		return nil, codersdk.ReadBodyAsError(res)
 	}
 
 	nextEvent := codersdk.ServerSentEventReader(ctx, res.Body)
 
 	for {
+		// TODO (Sasswart): I don't like that we do this select at the start and at the end.
+		// nextEvent should return an error if the context is canceled, but that feels like a larger refactor.
+		// if it did, we'd only have the select at the end of the loop.
 		select {
 		case <-ctx.Done():
-			return ctx.Err()
+			return nil, ctx.Err()
 		default:
 		}
 
 		sse, err := nextEvent()
 		if err != nil {
-			return xerrors.Errorf("failed to read server-sent event: %w", err)
+			return nil, xerrors.Errorf("failed to read server-sent event: %w", err)
 		}
 		if sse.Type != codersdk.ServerSentEventTypeData {
Contributor commented:

Ignoring Ping seems OK, but probably shouldn't ignore errors.

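One way to stop ignoring them, as a sketch, assuming the SSE reader distinguishes an error event type such as codersdk.ServerSentEventTypeError (not code from the PR):

	// Sketch: surface server-reported errors instead of skipping every
	// non-data event.
	if sse.Type == codersdk.ServerSentEventTypeError {
		return nil, xerrors.Errorf("server-sent error event: %v", sse.Data)
	}
	if sse.Type != codersdk.ServerSentEventTypeData {
		continue // pings and other keepalives are safe to ignore
	}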
 			continue
 		}
 		var reinitEvent ReinitializationEvent
 		b, ok := sse.Data.([]byte)
 		if !ok {
-			return xerrors.Errorf("expected data as []byte, got %T", sse.Data)
+			return nil, xerrors.Errorf("expected data as []byte, got %T", sse.Data)
 		}
 		err = json.Unmarshal(b, &reinitEvent)
 		if err != nil {
-			return xerrors.Errorf("unmarshal reinit response: %w", err)
+			return nil, xerrors.Errorf("unmarshal reinit response: %w", err)
 		}
 		select {
Contributor commented:

At this point you've got the event; no reason to check the context for cancelation again, just return it.

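Taken literally, that suggestion collapses the select below into one unconditional return (a sketch of the reviewer's point, not the PR's code):

	// Sketch: the event is already decoded, so hand it back directly and
	// let the caller watch ctx if it cares about cancelation.
	return &reinitEvent, nil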
 		case <-ctx.Done():
-			return ctx.Err()
-		case events <- reinitEvent:
+			return nil, ctx.Err()
+		default:
+			return &reinitEvent, nil
 		}
 	}
 }
+
+func WaitForReinitLoop(ctx context.Context, logger slog.Logger, client *Client) <-chan ReinitializationEvent {
+	reinitEvents := make(chan ReinitializationEvent)
+
+	go func() {
+		for retrier := retry.New(100*time.Millisecond, 10*time.Second); retrier.Wait(ctx); {
+			logger.Debug(ctx, "waiting for agent reinitialization instructions")
+			reinitEvent, err := client.WaitForReinit(ctx)
+			if err != nil {
+				logger.Error(ctx, "failed to wait for agent reinitialization instructions", slog.Error(err))
+				// reinitEvent is nil on error; retry rather than dereference it.
+				continue
+			}
+			select {
+			case <-ctx.Done():
+				close(reinitEvents)
+				return
+			case reinitEvents <- *reinitEvent:
+			}
+		}
+	}()
+
+	return reinitEvents
+}
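For context, a consumer of the returned channel looks roughly like the cli/agent.go change above; a minimal usage sketch, assuming a WorkspaceID field on ReinitializationEvent for illustration:

	reinitEvents := agentsdk.WaitForReinitLoop(ctx, logger, client)
	for {
		select {
		case <-ctx.Done():
			return
		case ev := <-reinitEvents:
			// Tear down and restart the agent for the claimed workspace.
			logger.Info(ctx, "agent reinitialization requested",
				slog.F("workspace_id", ev.WorkspaceID))
		}
	}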