OTLPMetricExporter fails to send more than 4MB of data #2710
Comments
Thanks for trying out the metrics SDK 🙂 The difference between metrics and trace/logs here is that all the metrics come in at once and there is no batching in the SDK. It is simply evaluating all of the observable instruments, and that's causing the issue. Do folks think we should add this batching mechanism to the
+1, there is this proposal open-telemetry/opentelemetry-proto#390 which I believe would make this work?
How does this help? The volume of data that reaches the exporter would still be the same size for each export cycle, right?
As a place to configure the batch size, and then the
Yes, I think we already do this in some exporters which take care of some protocol/encoding-specific limits. I would like to hear more about the batching in metrics. I am trying to understand if each
We briefly discussed this today. We should look at the spec for the correct status code for errors related to payload size, and see whether the response from the collector includes the acceptable size, so we can divide the batch into chunks before exporting.
It would be great to have this kind of behavior, but I think we should also be able to configure a "max batch size" in the OTLPMetricExporter.
@overmeulen any chance you'd be willing to send a PR for this?
Sure. So the idea would be to do the fix directly in the gRPC exporter?
Thanks, I'll assign you the issue. We haven't implemented the HTTP exporter yet, so that seems reasonable to me.
Is there some env var or something similar spec'd to configure this max batch size?
I was thinking of doing something similar to BatchLogProcessor
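To make the BatchLogProcessor analogy concrete, here is a minimal sketch of what a configurable max batch size could look like. The names `chunk`, `ChunkingExporterSketch`, and `max_batch_size` are illustrative stand-ins, not the actual SDK API; `export_fn` represents one gRPC Export request.

```python
from typing import Callable, Iterable, List, Sequence, TypeVar

T = TypeVar("T")

def chunk(items: Sequence[T], max_batch_size: int) -> Iterable[List[T]]:
    """Yield successive slices of at most max_batch_size items."""
    for start in range(0, len(items), max_batch_size):
        yield list(items[start:start + max_batch_size])

class ChunkingExporterSketch:
    """Hypothetical exporter wrapper: splits data points into bounded
    batches before handing each one to the underlying export call."""

    def __init__(self, export_fn: Callable[[List[T]], None],
                 max_batch_size: int = 512) -> None:
        self._export_fn = export_fn          # sends one chunk (e.g. over gRPC)
        self._max_batch_size = max_batch_size

    def export(self, data_points: Sequence[T]) -> None:
        for batch in chunk(data_points, self._max_batch_size):
            self._export_fn(batch)
```

With a batch size tuned so each serialized chunk stays under the gRPC message limit, a large collection of data points becomes several small requests instead of one oversized one.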
PR created and ready to be reviewed
Just adding the discussions from the PR #2809 which adds a
We are going ahead with this for now to keep it simple. Two alternatives would be
There's also the option of sending requests in parallel, where #2809 is sending each chunk serially.
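To illustrate the serial-vs-parallel trade-off, a hedged sketch where the `send` callable stands in for one gRPC Export request (not actual SDK code):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List, Sequence, TypeVar

T = TypeVar("T")

def export_chunks_serial(chunks: Sequence[List[T]],
                         send: Callable[[List[T]], None]) -> None:
    # The #2809 approach: one request at a time, in order; total latency
    # is the sum of the round-trips.
    for c in chunks:
        send(c)

def export_chunks_parallel(chunks: Sequence[List[T]],
                           send: Callable[[List[T]], None],
                           max_workers: int = 4) -> None:
    # Alternative: overlap the round-trips with a thread pool; completion
    # order (and thus arrival order at the collector) is not guaranteed.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(send, chunks))
```

Serial export keeps ordering and backpressure simple; parallel export lowers total export time at the cost of concurrency handling in the exporter, which matters when the export interval is short relative to the round-trip time.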
Describe your environment
Python 3.6.8
OpenTelemetry Python 1.12.0
OpenTelemetry Collector 0.38.0
Steps to reproduce
Use the metrics SDK with quite a few observable instruments generating more than 4MB of data (data points)
What is the expected behavior?
The datapoints are sent to the collector without any problem
What is the actual behavior?
The export to the collector fails with StatusCode.RESOURCE_EXHAUSTED
The exporter keeps on sending the same batch over and over until the data gets dropped
Additional context
One solution would be to have a configurable "max batch size" in the OTLPMetricExporter, like there is today in the BatchLogProcessor for logs.
Another solution would be for the OTLP gRPC exporter to automatically retry with a smaller batch if it receives StatusCode.RESOURCE_EXHAUSTED?
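A rough sketch of the second idea: retry with progressively smaller batches until each request fits. `PayloadTooLarge` is a stand-in for a gRPC `RpcError` whose code is `StatusCode.RESOURCE_EXHAUSTED`; `send` represents one export request.

```python
from typing import Callable, List, TypeVar

T = TypeVar("T")

class PayloadTooLarge(Exception):
    """Stand-in for an RpcError with StatusCode.RESOURCE_EXHAUSTED."""

def export_with_bisection(batch: List[T],
                          send: Callable[[List[T]], None],
                          min_size: int = 1) -> None:
    """Try to send the batch; on a payload-size failure, split it in two
    and retry each half, recursing until the pieces fit."""
    try:
        send(batch)
    except PayloadTooLarge:
        if len(batch) <= min_size:
            raise  # a single data point already exceeds the limit
        mid = len(batch) // 2
        export_with_bisection(batch[:mid], send, min_size)
        export_with_bisection(batch[mid:], send, min_size)
```

Compared with a fixed max batch size, this needs no tuning, but it wastes one oversized request per split and assumes RESOURCE_EXHAUSTED reliably means "payload too big" rather than some other server-side limit.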