
chore: replace wsconncache with a single tailnet #8176


Merged: 26 commits merged into main from colin/rm-wsconncache2 on Jul 12, 2023

Conversation

@coadler coadler commented Jun 22, 2023

This PR mostly removes wsconncache from coderd, replacing it with a single Tailnet per coderd. The biggest benefit is not needing a cache of Tailnets sitting around for each agent. Additionally, by reusing the same http.Transport for proxying to workspaces, we can reuse TCP connections across requests instead of opening and closing one for each request. This is the same strategy we use in wgtunnel.

I made some changes to the coordinator interface to support coderd hooking directly in, so I'd definitely like some feedback on that. I haven't yet implemented it for pgcoord.

Left out of this PR is support for moons. This was getting a bit too big, so I'll instead do that in a follow-up. Moons will still use wsconncache for the time being.
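
As a rough illustration of the transport reuse described above (a hypothetical sketch, not this PR's actual wiring; the names and settings are invented), the key is that one long-lived http.Transport owns the idle-connection pool:

package proxysketch

import (
    "net/http"
    "net/http/httputil"
    "net/url"
    "time"
)

// sharedTransport lives for the lifetime of the process. Its idle-conn
// pool is what lets successive proxied requests reuse an existing TCP
// connection instead of dialing and tearing one down per request.
var sharedTransport = &http.Transport{
    MaxIdleConnsPerHost: 6,
    IdleConnTimeout:     90 * time.Second,
    // In coderd, DialContext would be wired up to dial over the tailnet.
}

// newWorkspaceProxy builds a reverse proxy that shares the transport
// rather than constructing a new one per request.
func newWorkspaceProxy(target *url.URL) *httputil.ReverseProxy {
    rp := httputil.NewSingleHostReverseProxy(target)
    rp.Transport = sharedTransport
    return rp
}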

@coadler coadler self-assigned this Jun 22, 2023
@coadler coadler marked this pull request as ready for review June 26, 2023 18:09
@coadler coadler force-pushed the colin/rm-wsconncache2 branch from 0ba8fa2 to 7222135 Compare June 26, 2023 19:36
@@ -0,0 +1,169 @@
package agenttest
coadler (author):

This was essentially copy/pasted out of agent_test.go so it can be reused.

@kylecarbs (Member) left a comment:

I didn't manually test this, but the code looks good. It's sick that it's backward-compatible.

We should have a timeline on the removal of wsconncache, because that seems like some big dead weight.


// NewServerTailnet creates a new tailnet intended for use by coderd. It
// automatically falls back to wsconncache if a legacy agent is encountered.
func NewServerTailnet(
Member:

Although maybe a bit weird, I'd argue we should put this in workspaceagents.go since wsconncache will be going away.

@coadler coadler requested a review from spikecurtis June 28, 2023 06:50

coadler commented Jun 28, 2023

Ignore test failures, will fix those in the morning.

@@ -123,6 +123,11 @@ func NewPGCoord(ctx context.Context, logger slog.Logger, ps pubsub.Pubsub, store
return c, nil
}

func (c *pgCoord) ServeMultiAgent(id uuid.UUID) agpl.MultiAgentConn {
_, _ = c, id
panic("not implemented") // TODO: Implement
Contributor:

This effectively torpedoes the pgCoord, so we can't merge this PR until it's fixed.

coadler (author):

Since I'm still mostly fleshing out the API, I didn't think it made sense to implement it on pgCoord yet. Definitely a blocker for merge, though.

Contributor:

FWIW, I think what we need to do is drop the agent_id column from the tailnet_clients table and add a new table that tracks the agents each client wants to connect to:

CREATE TABLE tailnet_subscriptions (
    client_id uuid NOT NULL,
    coordinator_id uuid NOT NULL,
    agent_id uuid NOT NULL,
    PRIMARY KEY (client_id, coordinator_id, agent_id),
    FOREIGN KEY (client_id) REFERENCES tailnet_clients(id) ON DELETE CASCADE,
    FOREIGN KEY (coordinator_id) REFERENCES tailnet_coordinators(id) ON DELETE CASCADE
);

@spikecurtis (Contributor):

FWIW, I think the right architecture for both the in-memory and distributed coordinator is to consider an individual client connection as a special case of the more general "multiagent" idea. That is, a normal end user client is a multiagent where the number of agents is exactly one.

It's much easier to maintain and understand code that computes the general case, and then at the edge we have adaptations that wire up clients with exactly one agent, rather than trying to build two different sets of logic and keep them consistent.

So, in the core logic of the coordinator, clients are all multi-agent, and we compute and queue up updates on them based on the set of agents they've asked to connect to.

Then, at the very edge we have special case code that either serializes the nodes over the websocket, or sends them out over channels/callbacks depending on whether we have an end user, remote client, or a coderd, in-process client.
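
A hedged sketch of that shape, with assumed names and a deliberately simplified interface (the PR's actual API differs): the core computes only the multiagent case, and a classic single-agent client is an edge adapter:

package multiagentsketch

import (
    "context"
    "encoding/json"
    "net"

    "github.com/google/uuid"
)

// Node stands in for tailnet's Node; fields elided.
type Node struct{}

// MultiAgentConn is the general case the coordinator core computes over.
type MultiAgentConn interface {
    UpdateSelf(node *Node) error
    SubscribeAgent(agentID uuid.UUID) error
    UnsubscribeAgent(agentID uuid.UUID) error
    NextUpdate(ctx context.Context) ([]*Node, bool)
    Close() error
}

// serveClient adapts an end-user client, which wants exactly one agent,
// onto the multiagent path. Only this edge code knows the updates are
// serialized over a websocket/net.Conn.
func serveClient(ctx context.Context, ma MultiAgentConn, agentID uuid.UUID, conn net.Conn) error {
    if err := ma.SubscribeAgent(agentID); err != nil {
        return err
    }
    defer ma.Close()
    enc := json.NewEncoder(conn)
    for {
        nodes, ok := ma.NextUpdate(ctx)
        if !ok {
            return nil // the multiagent (or coordinator) closed
        }
        if err := enc.Encode(nodes); err != nil {
            return err
        }
    }
}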

// connection to.
agentNodes map[uuid.UUID]*tailnetNode

transport *http.Transport
Member:

This is so nice, great work!

Comment on lines 181 to 185
// multiAgents holds all of the unique multiAgents listening on this
// coordinator. We need to keep track of these separately because we need to
// make sure they're closed on coordinator shutdown. If not, they won't be
// able to reopen another multiAgent on the new coordinator.
multiAgents map[uuid.UUID]*MultiAgent
coadler (author):

Haven't added this to the haCoordinator yet.
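
For context, a speculative sketch of the shutdown behavior the diff comment above describes (illustrative names, not the PR's code): the coordinator closes each tracked MultiAgent so its owner notices and can reopen on the replacement coordinator:

package shutdownsketch

import (
    "sync"

    "github.com/google/uuid"
)

// multiAgentQueue stands in for the PR's MultiAgent; CoordinatorClose
// must not call back into the coordinator, which holds its mutex here.
type multiAgentQueue interface {
    CoordinatorClose() error
}

type coordinator struct {
    mu          sync.Mutex
    closed      bool
    multiAgents map[uuid.UUID]multiAgentQueue
}

func (c *coordinator) Close() error {
    c.mu.Lock()
    defer c.mu.Unlock()
    if c.closed {
        return nil
    }
    c.closed = true
    // Close every in-process queue so its owner observes the shutdown
    // and can open a fresh MultiAgent on the replacement coordinator.
    for id, ma := range c.multiAgents {
        _ = ma.CoordinatorClose()
        delete(c.multiAgents, id)
    }
    return nil
}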

@coadler coadler requested a review from spikecurtis June 29, 2023 01:41
// agentToConnectionSockets maps agent IDs to connection IDs of conns that
// are subscribed to updates for that agent.
agentToConnectionSockets map[uuid.UUID]map[uuid.UUID]*TrackedConn
agentToConnectionSockets map[uuid.UUID]map[uuid.UUID]Enqueueable
Contributor:

This is great!
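
A minimal sketch of what this abstraction buys (the interface shape here is assumed, not copied from the PR): the per-agent fan-out no longer distinguishes websocket-backed TrackedConns from in-process MultiAgents:

package enqueuesketch

import "github.com/google/uuid"

// Node stands in for tailnet's Node; fields elided.
type Node struct{}

// Enqueueable is a shape both TrackedConn (websocket-backed) and
// MultiAgent (in-process) can satisfy; the PR's exact methods differ.
type Enqueueable interface {
    UniqueID() uuid.UUID
    Enqueue(nodes []*Node) error
    Close() error
}

// fanOut delivers a node update to every subscriber of an agent without
// caring what kind of conn each subscriber is.
func fanOut(subs map[uuid.UUID]Enqueueable, nodes []*Node) {
    for _, q := range subs {
        if err := q.Enqueue(nodes); err != nil {
            _ = q.Close() // drop subscribers whose queues have failed
        }
    }
}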

@coadler coadler requested a review from spikecurtis June 30, 2023 04:24
}
delete(s.agentNodes, agentID)

// TODO(coadler): actually remove from the netmap
coadler (author):

Will do this before merge

// It is exported so that tests can use it.
const WriteTimeout = time.Second * 5

type TrackedConn struct {
Contributor:

Good call on moving this stuff to its own file. Much more readable.

Stats() (start, lastWrite int64)
Overwrites() int64
CoordinatorClose() error
Close() error
Contributor:

I don't understand the distinction between CoordinatorClose() and Close() --- how are they different and why do we need both?

coadler (author):

There's a sort-of chicken-and-egg problem. When you want to close an individual queue, you expect it to be removed from the coordinator. When the coordinator is closing, you want it to close all of the queues, but you don't want the queues to reach back into the coordinator to close themselves and re-enter the mutex.
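
Concretely, the two paths might look like this (an illustrative sketch under the assumptions above, not the PR's implementation):

package closesketch

import (
    "sync"

    "github.com/google/uuid"
)

// Node stands in for tailnet's Node; fields elided.
type Node struct{}

type coordinator struct {
    mu     sync.Mutex
    queues map[uuid.UUID]*queue
}

type queue struct {
    coord     *coordinator
    id        uuid.UUID
    closeOnce sync.Once
    updates   chan []*Node
}

// shutdown tears down only the queue itself; safe from either path.
func (q *queue) shutdown() {
    q.closeOnce.Do(func() { close(q.updates) })
}

// Close is the owner-facing path: deregister from the coordinator
// (taking its mutex), then tear the queue down.
func (q *queue) Close() error {
    q.coord.mu.Lock()
    delete(q.coord.queues, q.id)
    q.coord.mu.Unlock()
    q.shutdown()
    return nil
}

// CoordinatorClose is the coordinator-facing path: the coordinator is
// already holding its mutex, so reaching back into it would deadlock.
func (q *queue) CoordinatorClose() error {
    q.shutdown()
    return nil
}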

}

func (m *MultiAgent) UpdateSelf(node *Node) error {
return m.OnNodeUpdate(m.ID, node)
Contributor:

Closing the MultiAgent feels incomplete to me, since it seems we would still send updates, subscribes, and unsubscribes into the core even after it's been closed.

Furthermore, NextUpdate() runs on its own, separate Context rather than respecting the closed state of the MultiAgent. This seems like a recipe for zombie goroutines.

coadler (author):

NextUpdate() does respect the closed state of MultiAgent since m.updates gets closed when the MultiAgent is closed.
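
In sketch form (simplified, with assumed types), the pattern being described:

package updatesketch

import "context"

// Node stands in for tailnet's Node; fields elided.
type Node struct{}

type MultiAgent struct {
    updates chan []*Node // closed when the MultiAgent is closed
}

// NextUpdate returns when the caller's context is canceled, an update
// arrives, or m.updates is closed; the closed channel is what prevents
// goroutines from blocking here forever after Close.
func (m *MultiAgent) NextUpdate(ctx context.Context) ([]*Node, bool) {
    select {
    case <-ctx.Done():
        return nil, false
    case nodes, ok := <-m.updates:
        return nodes, ok // ok is false once updates has been closed
    }
}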

@coadler coadler requested a review from spikecurtis July 7, 2023 17:21

coadler commented Jul 11, 2023

I've made this opt-in via a feature flag, so it won't be enabled by default. The implementations for pgcoord and moons aren't done, but I plan on addressing that in follow-up PRs due to this one getting huge. After both are resolved, I'll remove the feature flag.

@spikecurtis (Contributor) left a comment:

I understand this PR is getting big and unwieldy, so I'm fine with it going in behind an experiment flag, but I would like to see a GitHub epic and GitHub issues tracking the things that need a resolution before this can go GA.

s.nodesMu.Lock()
agentConn := s.getAgentConn()
for agentID, node := range s.agentNodes {
if time.Since(node.lastConnection) > cutoff {
Contributor:

This measures the last time we started a connection, not the last time the connection was used. If we proxy a long-lived connection like ReconnectingPTY or a devURL websocket, it could easily be in use for greater than 30 minutes.

We might need some ref-counting to keep track of the connections to each agent, so that we expire them when they are no longer used.
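
A speculative sketch of that ref-counting (names modeled on the diff, bodies assumed): a conn is expired only when it is both unreferenced and idle past the cutoff:

package expirysketch

import (
    "sync"
    "time"

    "github.com/google/uuid"
)

type tailnetNode struct {
    refs     int       // live proxied conns/streams using this agent
    lastUsed time.Time // bumped on acquire and release
}

type serverTailnet struct {
    nodesMu    sync.Mutex
    agentNodes map[uuid.UUID]*tailnetNode
}

// acquire marks an agent conn as in use for the duration of a request
// or long-lived stream (ReconnectingPTY, devURL websocket).
func (s *serverTailnet) acquire(id uuid.UUID) {
    s.nodesMu.Lock()
    defer s.nodesMu.Unlock()
    n, ok := s.agentNodes[id]
    if !ok {
        n = &tailnetNode{}
        s.agentNodes[id] = n
    }
    n.refs++
    n.lastUsed = time.Now()
}

// release must be deferred by every acquirer.
func (s *serverTailnet) release(id uuid.UUID) {
    s.nodesMu.Lock()
    defer s.nodesMu.Unlock()
    if n, ok := s.agentNodes[id]; ok {
        n.refs--
        n.lastUsed = time.Now()
    }
}

// expire removes only nodes that are both unreferenced and idle past
// the cutoff, so a 30-minute-old but still-active stream survives.
func (s *serverTailnet) expire(cutoff time.Duration) {
    s.nodesMu.Lock()
    defer s.nodesMu.Unlock()
    for id, n := range s.agentNodes {
        if n.refs == 0 && time.Since(n.lastUsed) > cutoff {
            delete(s.agentNodes, id)
        }
    }
}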

@coadler coadler merged commit d7cbdbd into main Jul 12, 2023
@coadler coadler deleted the colin/rm-wsconncache2 branch July 12, 2023 22:38
@github-actions github-actions bot locked and limited conversation to collaborators Jul 12, 2023