
What to do when workspace is state-bricked? #2256

Closed
@ammario

Description


So, my workspace's Terraform state has gotten out of sync with the underlying infrastructure. It looks like this in the logs:

14:02:06.474
Plan: 1 to add, 0 to change, 0 to destroy.
14:02:06.873
kubernetes_deployment.coder[0]: Creating...
14:02:06.919
kubernetes_deployment.coder[0]: Creation errored after 0s
14:02:06.937
Error: Failed to create deployment: deployments.apps "coder-ammario-ab" already exists

I'm now dead in the water until an infrastructure admin deletes that deployment or re-pulls my state. What can we do to let the user reconcile broken state themselves? This kind of downtime didn't exist in v1, and it's a big threat to OSS usability IMO.


Can we have Terraform force-delete all ephemeral infrastructure? That seems like it would reconcile the state.
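As a stopgap until something like that exists, an operator can reconcile manually: either import the orphaned deployment into the workspace's Terraform state, or delete it so the next build can recreate it. A sketch, using the resource address and deployment name from the logs above (the `default` namespace is an assumption — substitute whatever namespace the workspace deploys into):

```shell
# Option 1: adopt the orphaned deployment into Terraform state so the
# next plan sees it as already created. Run against the workspace's
# Terraform config/state. Import ID format for the kubernetes provider
# is <namespace>/<name>; "default" here is an assumed namespace.
terraform import 'kubernetes_deployment.coder[0]' default/coder-ammario-ab

# Option 2: delete the orphaned deployment so the next workspace build
# can create it cleanly (loses anything stored in that deployment).
kubectl delete deployment coder-ammario-ab --namespace default
```

Either path requires cluster and state access, which is exactly the admin dependency the issue is asking to remove.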

Metadata

Assignees

No one assigned

Labels

api (Area: HTTP APIs), site (Area: frontend dashboard)

Milestone

No milestone