When writing medium to large dataframes (roughly 50K datapoints and up) we sometimes run into server errors (HTTP 500, timeouts, etc.). I understand this could be considered outside the scope of the client library, but if the error is caught outside of the batch loop we can't know which data was already uploaded and have to restart the upload for the entire dataframe.
Ideally, when using the batch_size option on DataFrameClient.write_points, the client would be aware of these errors and retry a failed batch. There are a few retry decorators out there that could be of use (e.g. https://github.com/d0ugal/retrace).
Any thoughts? Would this be a valuable addition? Would you mind taking a dependency on a retry decorator?
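
For illustration, here is roughly the behaviour I have in mind, written as a workaround a caller can use today: chunk the dataframe manually and retry only the failed chunk. The helper name, chunk size, and backoff parameters below are placeholders, not part of the library:

```python
from time import sleep

from influxdb import DataFrameClient
from influxdb.exceptions import InfluxDBServerError
from requests.exceptions import ConnectionError, Timeout


def write_dataframe_with_retries(client, df, measurement, chunk_size=50000,
                                 max_retries=3, backoff=2.0, **kwargs):
    """Write `df` in chunks, retrying each chunk on transient server errors.

    Only the failed chunk is retried, so a mid-upload HTTP 500 or timeout
    does not force re-uploading the whole dataframe.
    """
    for start in range(0, len(df), chunk_size):
        chunk = df.iloc[start:start + chunk_size]
        for attempt in range(1, max_retries + 1):
            try:
                client.write_points(chunk, measurement, **kwargs)
                break
            except (InfluxDBServerError, ConnectionError, Timeout):
                if attempt == max_retries:
                    # Give up on this chunk; the caller can resume from `start`.
                    raise
                sleep(backoff * attempt)


# Usage (connection parameters are just examples):
# client = DataFrameClient(host='localhost', port=8086, database='mydb')
# write_dataframe_with_retries(client, my_df, 'cpu_load', tag_columns=['host'])
```

Having something like this built into the batch loop of write_points (or delegated to a retry decorator) is what I'm asking about.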