Add structured metrics instrumentation #12230
Conversation
Awesome! This is a really great push for the analytics clients: they now provide a clear interface for common use cases, which will make them easier for us to use while ensuring that the structure of the events being sent is nicely digestible 😄
I had a few questions and suggestions to follow up on, happy to discuss them directly if you want! 👥 🚀
question: I'm considering whether to replace exceptions with logs entirely, or to raise them only in specific cases (such as during instantiation, or when debug mode is enabled), so that metrics instrumentation interrupts the emulator as little as possible. @alexrashed, what's your take on this?
Thanks a lot for acting on the comments from the previous reviews! A lot of the comments are resolved. I don't really see the necessity of the complex structure between the different Counter classes, but that shouldn't be a blocker.
I also added some more comments to discuss.
I feel like a lot of the complexity comes from the need to improve the developer experience when using the new module, but there are no real-world usages yet.
I think it is crucial to see this in action by migrating existing callers of the usage.py module. That will make it much clearer how this framework performs in terms of developer experience. 😄
I just added a few nitpick comments here, but from my perspective we are ready to roll!
Given that this is targeted for 4.3, and you might want to iterate over it one more time, we should be ready to merge this after the 4.2 release tomorrow! 🚀
FYI: I'm fixing the unit tests right now!
Motivation
The existing Counter implementation follows a parsing-based approach, relying on metric name patterns like label1:label2:* and inferring implicit relationships through JSON structures, dictionaries, or other nested data formats. Since these structures can vary depending on the feature or how each feature owner decides to implement them, querying, filtering, and aggregating metrics become inconsistent and complex. This lack of explicit structure makes it harder to standardize analysis across different use cases.
This refactor introduces a structured and extensible approach to metric collection, enforcing clear label standards and dimensional analysis while making all relationships and classifications explicit, rather than relying on naming conventions, positional placement, or nested JSON structures. By ensuring consistency, we can simplify querying, filtering, and aggregation, making analytics more scalable and reducing the need for custom metric handling.
Changes
- Moved metrics into a dedicated metrics.py module, making them a standalone concept, not strictly tied to usage.py.
- Introduced MetricRegistry as a singleton for managing all registered metrics (Counters, Gauges, etc.).
- Added Metric as the base interface for metrics, enforcing the implementation of the collect() method in all metric types (e.g., Counters, Gauges).
- Renamed the analytics event from ls:usage to ls_metrics.
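The sketch below illustrates how these pieces are meant to fit together; the class and method signatures are assumptions inferred from this description, not the exact code in metrics.py:

```python
from abc import ABC, abstractmethod


class Metric(ABC):
    """Base interface: every metric type (Counter, Gauge, ...) must
    implement collect() so the registry can gather its data uniformly."""

    @abstractmethod
    def collect(self) -> list[dict]:
        """Return the metric's current state as serializable payloads."""


class MetricRegistry:
    """Singleton managing all registered metrics."""

    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._metrics = []
        return cls._instance

    def register(self, metric: Metric) -> None:
        self._metrics.append(metric)

    def collect_all(self) -> list[dict]:
        # Flatten the payloads of every registered metric.
        return [payload for metric in self._metrics for payload in metric.collect()]
```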
Usage Examples
Basic Counter
A Counter tracks the occurrences of an event. It can be used without labels for simple counting:
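A minimal sketch of that case; the import path and constructor signature are assumptions based on this PR's description, not the final API:

```python
# Sketch only: import path and signatures are assumptions.
from localstack.utils.analytics.metrics import Counter

# A plain counter without labels: each increment() records one occurrence.
request_counter = Counter(name="number_of_requests")

def handle_request():
    request_counter.increment()
```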
Counters with Labels
Counters can include labels to categorize events more effectively:
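A sketch of labeled usage, assuming a labels() accessor that returns a child counter for one concrete combination of label values (all names here are illustrative):

```python
# Sketch only: the labels() accessor and its semantics are assumptions.
from localstack.utils.analytics.metrics import Counter

# Declare the label dimensions up front so every recorded value is explicit.
invocation_counter = Counter(name="function_invocations", labels=["status"])

# Each distinct combination of label values is counted separately, which is
# what makes querying, filtering, and aggregating by label straightforward.
invocation_counter.labels(status="success").increment()
invocation_counter.labels(status="error").increment()
```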
Testing
Unit Tests (No External Services Required)
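As an illustration, a unit test along these lines exercises a counter without any external services (import path and payload schema are assumptions):

```python
# Sketch only: import path and the shape of collect()'s output are assumptions.
from localstack.utils.analytics.metrics import Counter

def test_counter_increment_is_reflected_in_collect():
    counter = Counter(name="test_counter")
    counter.increment()
    payload = counter.collect()
    # The exact payload schema depends on the implementation; we only assert
    # that incrementing produced collectable data.
    assert payload
```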
End-to-End Testing with the Analytics Backend
For a more in-depth test, follow these steps:
```
ANALYTICS_API="http://localhost:8000/v1" \
DEBUG=1 \
DEBUG_ANALYTICS=1 \
python -m localstack.cli.main start --host
```
TODO
What's left to do: