feat: show devcontainer dirty status and allow recreate #17880
Conversation
Force-pushed from ce63f79 to c8a5bb8, then from c8a5bb8 to 2e133c2.
@@ -146,6 +150,136 @@ func (w *fakeWatcher) sendEventWaitNextCalled(ctx context.Context, event fsnotif
func TestAPI(t *testing.T) {
	t.Parallel()

	// List tests the API.getContainers method using a mock
	// implementation. It specifically tests caching behavior.
	t.Run("List", func(t *testing.T) {
This test was moved from internal to here. In the process we had to create WithCacheDuration and change the implementation from calling the internal getContainers to using the API endpoint (which seems reasonable). Otherwise the test remains unchanged.
// DevcontainerDirty is true if the devcontainer configuration has changed
// since the container was created. This is used to determine if the
// container needs to be rebuilt.
DevcontainerDirty bool `json:"devcontainer_dirty"`
Suggestion: reduce stuttering in name

-	DevcontainerDirty bool `json:"devcontainer_dirty"`
+	Dirty bool `json:"dirty"`
This is the container structure, so I wouldn’t consider this a stutter. A container being dirty isn’t very informative on its own. I’m sure we want to consolidate/simplify the API a bit down the line, so this could change later.
err = agentConn.RecreateDevcontainer(ctx, container)
if err != nil {
	if errors.Is(err, context.Canceled) {
		httpapi.Write(ctx, rw, http.StatusRequestTimeout, codersdk.Response{
			Message: "Failed to recreate devcontainer from agent.",
			Detail:  "Request timed out.",
		})
		return
	}
	// If the agent returns a codersdk.Error, we can return that directly.
	if cerr, ok := codersdk.AsError(err); ok {
		httpapi.Write(ctx, rw, cerr.StatusCode(), cerr.Response)
		return
	}
	httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
		Message: "Internal error recreating devcontainer.",
		Detail:  err.Error(),
	})
	return
}
What happens if we're halfway through recreating the devcontainer and the request gets canceled? Will the devcontainer eventually be in a running state, or will it be left in limbo?
Good observation, although this is not relevant to this endpoint, but rather the agentcontainers API. For now the devcontainer CLI command will be interrupted. I’m planning to change this behavior to a "job accepted" model rather than a request-scoped one, but that will come with refactoring the service to monitor (dev)containers and workers for asynchronous tasks.
With this change, the backend portion of #16424 is done.
What remains is exposing a UI button if a devcontainer has changed (dirty) and allowing it to hit the new route. Logs will be streamed in agent logs.
Updates #16424