[PECO-1259] Implement retry behavior for CloudFetch #211


Merged
kravets-levko merged 4 commits into main from PECO-1259-retry-policy on Jan 17, 2024

Conversation

kravets-levko
Contributor

kravets-levko commented on Dec 11, 2023

Signed-off-by: Levko Kravets <levko.ne@gmail.com>


susodapop left a comment


Left a couple comments.


// Delay interval depends on current attempt - the more attempts we do
// the longer the interval will be
// TODO: Respect `Retry-After` header (PECO-729)
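As a rough illustration of the attempt-based delay described in the code comment above (a sketch only; `retryDelayMin` and `retryDelayMax` are hypothetical config field names, not necessarily the driver's actual settings):

// Sketch: each attempt doubles the base delay, capped at a configured maximum (both in milliseconds).
function getRetryDelay(attempt: number, config: { retryDelayMin: number; retryDelayMax: number }): number {
  const scaledDelay = config.retryDelayMin * Math.pow(2, attempt);
  return Math.min(scaledDelay, config.retryDelayMax);
}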


Yep, this is important. When the server responds with 429 or 503, it includes a `Retry-After` header that we should respect.
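For reference, a hedged sketch of reading that header (it assumes the transport exposes a Fetch-style response object, and it only handles the numeric form of `Retry-After`, not the HTTP-date form):

// Sketch: returns a server-suggested delay in milliseconds, or undefined if none applies.
function getRetryAfterMs(response: { status: number; headers: { get(name: string): string | null } }): number | undefined {
  if (response.status !== 429 && response.status !== 503) {
    return undefined;
  }
  const retryAfter = response.headers.get('Retry-After');
  const seconds = retryAfter === null ? NaN : Number(retryAfter);
  return Number.isFinite(seconds) ? seconds * 1000 : undefined;
}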

Contributor Author

Yes, that's why we have PECO-729. This PR leaves that particular code almost untouched; it was just moved to a different place so it could be used for CloudFetch. After we merge this, I'm going to make it match PySQL's behavior, as you mentioned in your other comment below.

// TODO: Respect `Retry-After` header (PECO-729)
const retryDelay = getRetryDelay(this.attempt, clientConfig);

const attemptsExceeded = this.attempt >= clientConfig.retryMaxAttempts;


Nit: I think you can bump this logic up a few lines. If we already exceeded the maximum number of attempts, there's no reason to compute the retry delay.
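A sketch of the reordering suggested here, reusing the `getRetryDelay` sketch above (the `clientConfig` shape and the return type are assumptions, not the PR's actual code):

// Sketch: check the attempt budget first, and only compute a delay when a retry will actually happen.
function decideRetry(attempt: number, clientConfig: { retryMaxAttempts: number; retryDelayMin: number; retryDelayMax: number }): { shouldRetry: boolean; retryAfter?: number } {
  const attemptsExceeded = attempt >= clientConfig.retryMaxAttempts;
  if (attemptsExceeded) {
    return { shouldRetry: false };
  }
  const retryDelay = getRetryDelay(attempt, clientConfig);
  return { shouldRetry: true, retryAfter: retryDelay };
}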


return { shouldRetry: true, retryAfter: retryDelay };

// TODO: Here we should handle other error types (see PECO-730)


@@ -265,7 +280,7 @@ describe('BaseCommand', () => {
       };

       const command = new CustomCommand(
-        new ThriftClientMock(() => {
+        new ThriftClientMock(context, () => {


Do you have an integration test that verifies when requests are retried?

Contributor Author

Not yet, but I'm going to add some after implementing PECO-729 (which should wrap up the whole retry feature).
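As a sketch of what such coverage might look like (a self-contained mocha/chai example with a hand-rolled retry helper and a fake flaky operation, not the repository's actual `ThriftClientMock` API):

import { expect } from 'chai';

// Hypothetical retry helper standing in for the command logic under test.
async function executeWithRetry<T>(operation: () => Promise<T>, maxAttempts: number): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt += 1) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
    }
  }
  throw lastError;
}

describe('retry behavior (sketch)', () => {
  it('retries a transient failure until it succeeds', async () => {
    let calls = 0;
    const flakyOperation = async () => {
      calls += 1;
      if (calls < 3) {
        throw new Error('503 Service Unavailable');
      }
      return 'ok';
    };

    const result = await executeWithRetry(flakyOperation, 5);
    expect(result).to.equal('ok');
    expect(calls).to.equal(3);
  });
});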


susodapop left a comment


Approving as this is a strict refactor. Improvements to come in a subsequent PR.

kravets-levko merged commit a4e4aa7 into main on Jan 17, 2024
kravets-levko deleted the PECO-1259-retry-policy branch on January 17, 2024 at 19:01