feat: expose agent metrics via Prometheus endpoint #7011
Conversation
agentsGauge.Reset()
agentsConnectionsGauge.Reset()
agentsConnectionLatenciesGauge.Reset()
agentsAppsGauge.Reset()
This feels like a race condition. We reset the gauges, then loop over something that does db calls.
The loop can be midway through when a scrape happens, and the numbers will be incorrect.
One cheap way to fix this is to make our own prometheus.Collector and only update the values of the metrics at the end of the function. So we store the last values while we compute the new values, and always return the last completed dataset.
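For illustration, a minimal sketch of that approach, assuming a hypothetical metric name (coderd_agents_up) and a refresh loop elsewhere that calls update() once its database queries finish; this is not the PR's code:

```go
package prometheusmetrics

import (
	"sync"

	"github.com/prometheus/client_golang/prometheus"
)

// agentSample is one cached data point: a label set plus its value.
type agentSample struct {
	username, workspaceName string
	value                   float64
}

// cachedAgentsCollector always exposes the last fully computed snapshot,
// so a scrape can never observe a half-built dataset.
type cachedAgentsCollector struct {
	mu       sync.Mutex
	agentsUp *prometheus.Desc
	snapshot []agentSample
}

func newCachedAgentsCollector() *cachedAgentsCollector {
	return &cachedAgentsCollector{
		agentsUp: prometheus.NewDesc(
			"coderd_agents_up", // hypothetical metric name
			"Agents known to coderd.",
			[]string{"username", "workspace_name"}, nil,
		),
	}
}

func (c *cachedAgentsCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.agentsUp
}

// Collect emits the cached snapshot only; it never waits on database work.
func (c *cachedAgentsCollector) Collect(ch chan<- prometheus.Metric) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for _, s := range c.snapshot {
		ch <- prometheus.MustNewConstMetric(c.agentsUp, prometheus.GaugeValue, s.value, s.username, s.workspaceName)
	}
}

// update swaps in a snapshot that the refresh loop finished computing.
func (c *cachedAgentsCollector) update(snapshot []agentSample) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.snapshot = snapshot
}
```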
Gauges are being reset on every loop iteration before collecting any metrics. In theory, it's possible that a scrape happens after the reset but before the end of the iteration, so metrics may disappear.
I will look into a custom collector. BTW, most likely the same issue affects prometheusmetrics.Workspaces().
Yea, that one at least doesn't do db calls in the increment, but it does affect that too.
This is my first time looking at this.
This feels like a race condition. We reset the gauges, then loop over something that does db calls.
I implemented a wrapper to mitigate concurrency risks. It seems to be the simplest way to address it: Collect() will wait a bit if there is a "flushing" procedure ongoing.
@@ -106,3 +117,159 @@ func Workspaces(ctx context.Context, registerer prometheus.Registerer, db databa
	}()
	return cancelFunc, nil
}

// Agents tracks the total number of workspaces with labels on status.
func Agents(ctx context.Context, logger slog.Logger, registerer prometheus.Registerer, db database.Store, coordinator *atomic.Pointer[tailnet.Coordinator], derpMap *tailcfg.DERPMap, agentInactiveDisconnectTimeout, duration time.Duration) (context.CancelFunc, error) {
Does this data already live on each workspace/agent?
I wonder if there is another way to do this. In v1 we implemented a custom prometheus.Collector to handle agent stats in a non-racy way.
My idea was to keep it aligned with the other prometheusmetrics and use a single source of metrics: the database. That way, the information we present over the Coderd API is consistent with the Prometheus endpoint.
Regarding the prometheus.Collector, I will take a look 👍 (as stated in the other comment).
I find the v1 implementation a bit complex for this use case. Since I tried to keep metric collection separate from the agent reporting logic, a collector like the one in v1 would be great if we had metrics coming from different parts of the application.
BTW, it looks like the v1 collector doesn't support vectors, but here we depend mostly on them. Porting the collector would make it more complex.
Yea, I didn't think we could port the v1 metrics verbatim. How it works, though, is that each agent uses prometheus to create its own metrics. Those metrics get pushed to the coderd the agent is connected to, and the aggregator then combines them all, labeling them per workspace.
The v1 collector does support labels, which is what "vectors" are: each unique label set is a single "metric" to the aggregator. So coderd_agents_metric{favorite-number="7"} and coderd_agents_metric{favorite-number="1"} are 2 different prometheus.Metric values. This matches the prometheus design of labels:
Remember that every unique combination of key-value label pairs represents a new time series
I liked the v1 design as it made it easier to add metrics from the agent; I think we build our own payloads in v2. 🤷♂️ I was more pointing it out because making a Collector gives you a lot more freedom in how to manipulate the Gather part of the metrics.
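To make the label point concrete, a rough sketch (not the v1 aggregator itself; the names are illustrative, and the label uses an underscore because hyphens are not valid in Prometheus label names):

```go
package prometheusmetrics

import "github.com/prometheus/client_golang/prometheus"

var exampleDesc = prometheus.NewDesc(
	"coderd_agents_metric", // name borrowed from the example above
	"Example agent metric, labeled per agent.",
	[]string{"favorite_number"}, nil,
)

// exampleAggregator emits one prometheus.Metric per unique label value.
type exampleAggregator struct{}

func (exampleAggregator) Describe(ch chan<- *prometheus.Desc) { ch <- exampleDesc }

func (exampleAggregator) Collect(ch chan<- prometheus.Metric) {
	// {favorite_number="7"} and {favorite_number="1"} are two separate
	// metrics to the collector and two separate time series to Prometheus.
	ch <- prometheus.MustNewConstMetric(exampleDesc, prometheus.GaugeValue, 1, "7")
	ch <- prometheus.MustNewConstMetric(exampleDesc, prometheus.GaugeValue, 1, "1")
}
```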
How it works though is each agent uses prometheus to create their metrics. Those metrics get sent to the coderd they are connected to with their agent, and push their prom metrics.
That is one approach, but unfortunately we would miss metrics from disconnected agents.
As stated in the PR description, the idea behind this submission is to expose metric data we have already collected and stored in the database. If we already have this data in the database, why not just use it? :) A similar story applies to "agent stats", which are stored in a dedicated database table.
Let me know your thoughts.
Disconnected agent stats in v1 Prometheus eventually go "stale" and are then removed, since Prometheus doesn't need to be a perfect source of truth (a little lag, e.g. 1 min, is OK imo).
I agree with you, though, that just exposing what we currently have is the go-to move. My initial hunch was to make a collector that has all the internal counters you are trying to track. When the "Gather" func is called, the Collector returns the cached counters.
Every "update" period, new counts are created and incremented in an internal loop. When the counts are finished, the Collector is locked, all counters are updated, and it is unlocked.
So the exposed counters are always "correct".
My initial hunch was to make a collector that has all the internal counters you are trying to track.
The idea is good for sure, it's just a matter of the capacity we have 👍
Every "update" period, new counts are created and incremented in an internal loop. When the counts are finished, the Collector is locked, all counters are updated, and it is unlocked.
Yup, this is more or less what I've implemented at the gauge-vector level: CachedGaugeVec. I just found it simpler than the concept of a full collector.
Ah that makes sense and looks good 👍
Small nits, LG
func (v *CachedGaugeVec) Describe(desc chan<- *prometheus.Desc) {
	v.m.Lock()
	defer v.m.Unlock()

	v.gaugeVec.Describe(desc)
}
This actually does not need the mutex. Describe is safe and does not return the counter, which is what you are protecting.
Describe is not really called much in prod, if ever, so it's not that big of a deal.
I wasn't sure if Describe should be guarded or not, so thanks for raising it. I removed the mutex from the function.
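For reference, a sketch of the function with the mutex dropped; Describe only forwards metric descriptors and never touches the cached records the mutex protects:

```go
func (v *CachedGaugeVec) Describe(desc chan<- *prometheus.Desc) {
	v.gaugeVec.Describe(desc)
}
```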
type CachedGaugeVec struct {
	m sync.Mutex

	gaugeVec *prometheus.GaugeVec
	records  []vectorRecord
}
Can you doc the usage? And the why?
Eg:
CachedGaugeVec does .....
Calling WithLabelValues will update the internal gauge value. The value will not be returned by 'Collect' until 'Commit' is called.
'Commit' will reset the internal value, requiring the next set of values to build upon a completely reset metric.
Or something...
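A short usage sketch of that contract; the function name, import path, label set, and exact WithLabelValues signature below are assumptions for illustration, not necessarily the exact API in this PR:

```go
package example

import "github.com/coder/coder/coderd/prometheusmetrics"

// agentRow stands in for whatever the refresh loop reads from the database.
type agentRow struct {
	Username      string
	WorkspaceName string
}

func refreshAgentsGauge(cached *prometheusmetrics.CachedGaugeVec, agents []agentRow) {
	for _, agent := range agents {
		// Recorded internally; Collect keeps returning the previous
		// snapshot until Commit publishes the new one.
		cached.WithLabelValues(prometheusmetrics.VectorOperationSet, 1, agent.Username, agent.WorkspaceName)
	}
	// Publish the completed dataset in a single step.
	cached.Commit()
}
```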
Added 👍
	})
}

func (v *CachedGaugeVec) Commit() {
// Commit will set the internal value as the cached value to return from 'Collect'.
// The internal metric value is completely reset, so the caller should expect
// the gauge to be empty for the next 'WithLabelValues' values.
func (v *CachedGaugeVec) Commit() {
Comment added.
switch record.operation {
case VectorOperationAdd:
	g.Add(record.value)
case VectorOperationSet:
	g.Set(record.value)
default:
	panic("unsupported vector operation")
}
Might want this switch statement on the WithLabelValues call so the panic is closer to the source.
This switch is also useful to pick the right operation, so I will add another switch-case-panic to WithLabelValues.
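A sketch of what that could look like, so an unsupported operation panics at the caller rather than at Commit time (the record fields mirror the snippets in this thread and may differ slightly from the final code):

```go
func (v *CachedGaugeVec) WithLabelValues(operation VectorOperation, value float64, labelValues ...string) {
	// Validate eagerly: the panic now points at the caller's stack,
	// not at the later Commit.
	switch operation {
	case VectorOperationAdd:
	case VectorOperationSet:
	default:
		panic("unsupported vector operation")
	}

	v.m.Lock()
	defer v.m.Unlock()
	v.records = append(v.records, vectorRecord{
		operation:   operation,
		value:       value,
		labelValues: labelValues,
	})
}
```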
switch operation {
case VectorOperationAdd:
case VectorOperationSet:
default:
I usually prefer this. But it does not matter.
switch operation {
case VectorOperationAdd, VectorOperationSet:
default:
}
Related: #6724
This PR exposes basic agent metrics via the Prometheus endpoint. It is low-hanging fruit, as all the information is already exposed via the HTTP API.