feat: add logSender for sending logs on agent v2 API #12046
Conversation
This stack of pull requests is managed by Graphite.
// sendLoop sends any pending logs until it hits an error or the context is canceled. It does not
// retry as it is expected that a higher layer retries establishing connection to the agent API and
// calls sendLoop again.
func (l *logSender) sendLoop(ctx context.Context, dest logDest) error {
Please take into consideration what context is passed here when this code is used. On shutdown the context will likely be cancelled, but we still want to try to send all the logs.
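
A minimal sketch of one way a caller could handle this, assuming Go 1.21+ for context.WithoutCancel; the flushOnShutdown name and the 10-second timeout are hypothetical, not part of this PR:

// flushOnShutdown (hypothetical) detaches from the canceled parent
// context so queued logs can still be flushed, but bounds the flush
// with a fresh timeout so shutdown cannot hang indefinitely.
func flushOnShutdown(ctx context.Context, l *logSender, dest logDest) error {
	flushCtx := context.WithoutCancel(ctx) // Go 1.21+
	flushCtx, cancel := context.WithTimeout(flushCtx, 10*time.Second)
	defer cancel()
	return l.sendLoop(flushCtx, dest)
}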
A few remaining nits, but otherwise this is looking good! Nice work.

I wonder if it would make sense to move more of the logic to enqueue vs sendLoop, though? Pre-processing allows us to avoid the temporary memory consumption for data we don't want.
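
As a rough sketch of that suggestion: only logSender, agentsdk.ProtoFromLog, and slog.Error are visible in the diff; the mutex, queues map, outputLen, maxBytesQueued, and the uuid source parameter are assumptions for illustration.

// enqueue pre-processes logs at insertion time: it converts each log
// to its proto form and drops anything over the byte budget, so
// sendLoop only ever sees ready-to-send, in-budget entries.
func (l *logSender) enqueue(src uuid.UUID, logs ...agentsdk.Log) {
	l.mu.Lock()
	defer l.mu.Unlock()
	for _, log := range logs {
		pl, err := agentsdk.ProtoFromLog(log)
		if err != nil {
			l.logger.Critical(context.Background(), "failed to convert log", slog.Error(err))
			continue
		}
		if l.outputLen+len(pl.Output) > maxBytesQueued {
			l.logger.Warn(context.Background(), "log queue full, discarding log")
			continue
		}
		l.queues[src] = append(l.queues[src], pl)
		l.outputLen += len(pl.Output)
	}
}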
q.logs[i] = nil // allow sent logs to be garbage collected
}
// Drop the sent prefix and release its byte budget.
q.logs = q.logs[n:]
l.outputLen -= outputToRemove
Do we actually need to decrement this?

I mean, it's probably fine, but in theory I suppose we could end up queueing maxBytesQueued even after the database is near-full. Let's say we sent maxBytesQueued-1, then the connection is interrupted, and we queue up another maxBytesQueued only to be discarded after the connection is re-established.
I didn't want to reimplement the server-side limit in the agent. That way, if the server changes the limit, agents will still work with the new limit.

I get that we want some agent-side limit since the logs are stored in memory, and so the current server limit feels appropriate. But as long as the agent is keeping pace reasonably well sending logs to the server, I don't want to stop it from sending until the server says we've reached the limit.
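
For illustration, deferring the hard limit to the server could look roughly like this inside sendLoop; the BatchCreateLogs call shape, the log-limit-exceeded response flag, and the exceededLogLimit field are assumptions about the v2 API, not confirmed details of this PR.

resp, err := dest.BatchCreateLogs(ctx, req)
if err != nil {
	return xerrors.Errorf("failed to upload logs: %w", err)
}
if resp.GetLogLimitExceeded() {
	// The server is the source of truth for the limit: once it says
	// we are over, stop sending and drop whatever is still queued.
	l.logger.Warn(ctx, "server log limit exceeded, discarding queued logs")
	l.exceededLogLimit = true
	l.queues = nil
	return nil
}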
Just one minor nit, but other than that, looks good 👍
pl, err := agentsdk.ProtoFromLog(log)
if err != nil {
	logger.Critical(context.Background(), "failed to convert log", slog.Error(err))
	return
Since we don't give an error, perhaps continue is better here so we don't discard the entire batch?
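
The suggested change, sketched; the surrounding loop and the batch variable are assumed for illustration:

for _, log := range logs {
	pl, err := agentsdk.ProtoFromLog(log)
	if err != nil {
		logger.Critical(context.Background(), "failed to convert log", slog.Error(err))
		continue // skip only the malformed log, keep the rest of the batch
	}
	batch = append(batch, pl)
}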
Adds a new subcomponent of the agent for queueing up logs until they can be sent over the Agent API.
A subsequent PR will change the agent to use this instead of the HTTP API for posting logs.
Relates to #10534
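
A hypothetical end-to-end usage sketch: the newLogSender constructor and dialAgentAPI helper are not confirmed by this PR, and the reconnect cadence (using the coder/retry package) is an assumption; the reconnect loop lives in a higher layer, as the sendLoop doc comment expects.

sender := newLogSender(logger)

// Producer side: scripts and other sources queue logs without
// blocking on the network.
sender.enqueue(sourceID, agentsdk.Log{
	CreatedAt: time.Now(),
	Output:    "hello from the agent",
	Level:     codersdk.LogLevelInfo,
})

// Consumer side: a higher layer re-establishes the agent API
// connection and restarts sendLoop after each failure.
go func() {
	for r := retry.New(time.Second, 30*time.Second); r.Wait(ctx); {
		dest, err := dialAgentAPI(ctx) // hypothetical dial helper
		if err != nil {
			continue
		}
		if err := sender.sendLoop(ctx, dest); err != nil {
			logger.Warn(ctx, "log send loop exited", slog.Error(err))
		}
	}
}()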