feat(support): fetch data concurrently #12385
Conversation
I think it works as-is, but could be simplified quite a bit with errgroup.
Force-pushed from 4e123d4 to b0cd05a.
support/support.go (Outdated)

```go
var (
	wg sync.WaitGroup
	m  sync.Mutex
)
```
Switch to errgroup here too?
support/support.go (Outdated)

```go
if err != nil {
	log.Error(ctx, "fetch build info", slog.Error(err))
} else {
	eg, ctx := errgroup.WithContext(ctx)
```
Just realized we only log errors, so we might be interested in partial results? If so, it might make sense to just use `eg := &errgroup.Group{}`. That way the other goroutines won't be interrupted when one runs into an error. (Sorry I didn't take this into account in my previous review.)
👍
Out of curiosity: how much time do we save by switching to the concurrent mode?
We're going from about 13 serial requests right now to potentially having 12 of those 13 in parallel. So quite a bit, depending on round-trip time.
I hope we don't hit any Coder rate limiter or customer's proxy :)
Fair, I'll set max concurrency to 4, as our default rate limit is 8-and-change requests per second. Edit: actually, if folks run into issues with this we can just set GOMAXPROCS.
Neat 👍
Do we have any idea how long these steps take in practice?
Do we need timing info for each step? Presumably we're parallelising the steps because the fetching currently takes too long.
Something to keep in mind: we should probably use https://github.com/uber-go/automaxprocs unless it's been disregarded previously.
The rationale for concurrent fetching is mainly from this comment. I don't think we need timing info for the moment; it can be inferred from the debug-level client logs that get emitted.
I've been on the fence about adding this; it's been bugging me for a while that the Go runtime isn't CFS-aware. It would be a big enough change, though. Maybe a separate PR?
We had a PR for enabling automaxprocs in the agent, but it seems to have fallen into oblivion. I'm all for it, though.
Fetches data concurrently to speed up the process of generating a support bundle.