feat(flagd-rpc): adding grace attempts #117
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##             main     #117      +/-   ##
==========================================
+ Coverage   94.04%   94.25%   +0.21%
==========================================
  Files          14       14
  Lines         705      731      +26
==========================================
+ Hits          663      689      +26
  Misses         42       42

☔ View full report in Codecov by Sentry.
Force-pushed from 8f59b7d to 3f6f256 (compare)
Force-pushed from 3f6f256 to 996286f (compare)
Signed-off-by: Simon Schrottner <simon.schrottner@dynatrace.com>
Force-pushed from f2eefab to 9c89fe4 (compare)
@@ -19,10 +20,13 @@ class CacheType(Enum):
 DEFAULT_HOST = "localhost"
 DEFAULT_KEEP_ALIVE = 0
 DEFAULT_OFFLINE_SOURCE_PATH: typing.Optional[str] = None
+DEFAULT_OFFLINE_POLL_MS = 5000
I added this to be compliant with the config Gherkin test; the usage of this will be fixed as soon as we touch the in-process provider.
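For context, a minimal sketch of how such a default could be resolved in a configuration; the helper name, the resolution order, and the FLAGD_OFFLINE_POLL_MS environment variable name are assumptions for illustration, not the provider's actual implementation:

import os
import typing

DEFAULT_OFFLINE_POLL_MS = 5000


def resolve_offline_poll_interval_ms(explicit: typing.Optional[int] = None) -> int:
    # Hypothetical resolution order: explicit argument > environment variable > default.
    if explicit is not None:
        return explicit
    env_value = os.environ.get("FLAGD_OFFLINE_POLL_MS")
    return int(env_value) if env_value is not None else DEFAULT_OFFLINE_POLL_MS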
Signed-off-by: Simon Schrottner <simon.schrottner@dynatrace.com>
Force-pushed from 9c89fe4 to 518572a (compare)
@toddbaert @open-feature/sdk-python-approvers @open-feature/sdk-python-maintainers @beeme1mr @warber @chrfwow a review would be highly appreciated
Please mark this PR as breaking and include a list of the breaking changes in the PR description. I'm fine making the change to bring the Python implementation into spec compliance.
What is breaking?
Never mind, nothing public was changed.
def handle_error(self, retry_counter: int, retry_delay: float) -> None:
    if retry_counter == self.retry_grace_attempts:
        if self.cache:
            self.cache.clear()
        self.emit_provider_error(
            ProviderEventDetails(
                message=f"gRPC sync disconnected, reconnecting in {retry_delay}s",
Don't we increase retry_delay in handle_retry(), so we wouldn't print the correct value here?
We increment the retry_delay after the sleep, so it will only be increased in the next iteration; the order of execution ensures the right value is printed here. But for clarity, I'll reorder the methods.
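To make the ordering explicit, a minimal sketch of such a loop (the function name, backoff factor, and exception type are assumptions, not the provider's actual code): the current delay is reported and slept on before it is increased, so the emitted message always matches the wait that actually happens.

import time


def reconnect_with_backoff(connect, retry_delay: float, backoff_max: float) -> None:
    # Hypothetical loop illustrating the order of operations discussed above.
    while True:
        try:
            connect()
            return
        except ConnectionError:
            # The message reports the delay we are about to wait for ...
            print(f"gRPC sync disconnected, reconnecting in {retry_delay}s")
            time.sleep(retry_delay)
            # ... and only afterwards is the delay increased for the next iteration.
            retry_delay = min(retry_delay * 2, backoff_max)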
retry_delay = self.handle_retry(retry_counter, retry_delay)
retry_counter = retry_counter + 1
Just for my understanding: the retry_counter seems to be incremented (and the error and retry handling executed) in every loop iteration, even if the connection succeeds, or am I missing something?
You are right. handle_error is a bad name for what I am actually doing. Two things are hidden behind it:
- if we hit our grace_attempts with the error_counter, we emit an error event
- if our immediate reconnect did not work, we emit a stale event the first time
Signed-off-by: Simon Schrottner <simon.schrottner@dynatrace.com>
nice work 🍻
self.on_connection_error(retry_counter, retry_delay)
retry_delay = self.handle_retry(retry_counter, retry_delay)
Just an idea: why not change retry_counter and retry_delay to private instance attributes? Then you don't have to pass them in and return them, and can instead update them directly.
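For reference, a quick sketch of what that alternative could look like (purely illustrative; the attribute names, defaults, and backoff factor are assumptions):

class GrpcResolver:
    def __init__(self, retry_backoff_seconds: float = 1.0, retry_backoff_max_seconds: float = 120.0) -> None:
        # Retry state kept as private instance attributes instead of being
        # threaded through parameters and return values.
        self._retry_counter = 0
        self._retry_delay = retry_backoff_seconds
        self._retry_backoff_max_seconds = retry_backoff_max_seconds

    def _handle_retry(self) -> None:
        # No parameters or return value: the state is updated in place.
        self._retry_delay = min(self._retry_delay * 2, self._retry_backoff_max_seconds)
        self._retry_counter += 1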
Clarity is an integral part of software development. If we put this on the class level, I am hiding important information, as the manipulation and influence of those properties are not fully transparent. These are connection-specific properties handled and modified only during this method and the submethods that are part of this logic. Hiding them on the instance, or not showing their implications, might be a nice shortcut for a smaller method footprint, but the clarity and connection of these variables to this method would be lost. Furthermore, they might also be used or seen as properties of the resolve logic. Hence, we should keep them close to their use case.
If you want to go this route, then you should extract self.retry_backoff_max_seconds from handle_retry() and make it a class method; they are tightly coupled and should be treated in the same way. handle_retry() is just an extracted logic snippet without being reused anywhere. Furthermore, the incrementation of the retry_counter happens outside of the handle_retry() function, which contradicts your statement, because it is not clear why this is done separately and not directly inside the method.
First off, I really appreciate the time and effort you put into reviewing pull requests—it’s clear you care about the quality of the work and the success of our project. That said, I wanted to share some thoughts on how we might make our collaboration on reviews a bit smoother and more effective.
Sometimes, the way your comments are phrased can come across as more directive or critical than collaborative. I know that’s likely not your intention, but it can make it a bit challenging to engage in a back-and-forth discussion. For example, it sometimes feels like suggestions are presented without much context, making it hard to fully understand your perspective or have a productive debate. Additionally, when there are differing opinions, it can feel like there’s more emphasis on following your suggestion than on finding a shared solution together.
I think we’re working on something really cool here, and it would be awesome if we could approach pull request reviews as more of a collaborative effort. To help with that, I’d recommend taking a look at this blog post, which has some great tips for making review comments more constructive and easier to follow.
I truly value your feedback, and I hope this helps make the review process feel more like a partnership. Let me know what you think, and thanks again for all the work you put into this!
TBH the scope of these variables is not something I find that important, so neither of these refactors is a blocker for me. I can see both sides of this discussion.
As far as I can see, nothing in this PR violates the flagd spec.
providers/openfeature-provider-flagd/src/openfeature/contrib/provider/flagd/resolvers/grpc.py (outdated; conversation resolved)
Signed-off-by: Simon Schrottner <simon.schrottner@dynatrace.com>
Looks consistent with the spec to me. I don't have any concerns.
adding grace attempts
Note:
The configuration changes are not breaking because the provider passes them through to the configuration. Hence, we're safe.
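As a rough illustration of why this stays non-breaking (class and parameter names, and the default value, are assumptions, not the actual code): the provider's public constructor forwards its options to the internal configuration, and the new option has a default, so existing call sites keep working unchanged.

class Config:
    def __init__(self, retry_grace_attempts: int = 5, **kwargs: object) -> None:
        # New option with a default: existing callers are unaffected.
        self.retry_grace_attempts = retry_grace_attempts


class FlagdProvider:
    def __init__(self, **kwargs: object) -> None:
        # Options are passed straight through to the internal configuration,
        # so the provider's public surface does not change.
        self.config = Config(**kwargs)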