Refactored retry config into _retry.py and added support for exponential backoff and Retry-After header #871
base: fcm-http2
Conversation
Thanks Jonathan! Overall looking good! Added a few comments!
firebase_admin/_retry.py (Outdated)
class HttpxRetry:
    """HTTPX based retry config"""
    # TODO: Decide
    # urllib3.Retry ignores the status_forcelist when respecting Retry-After header
Does this mean urllib3.Retry retries other status codes (that aren't in status_forcelist) when they have Retry-After headers?
I think I worded that comment poorly, so you can disregard it.
What I wanted to point out is that urllib3.Retry has no default status codes that are retried when neither status_forcelist nor respect_retry_after_header is set. The RETRY_AFTER_STATUS_CODES are only applied when respect_retry_after_header is set.
I decided to do the same in this implementation, but still wanted to highlight it because the urllib3 docs were misleading and implied these are used as status_forcelist defaults.
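That gating can be sketched as follows; a minimal illustration of the behavior described above, using hypothetical names (is_retryable_status is not the PR's actual API; RETRY_AFTER_STATUS_CODES mirrors urllib3's set):

```python
# Sketch: Retry-After-eligible status codes are only retried when
# respect_retry_after_header is set, and there are no default retried
# statuses otherwise. Names are illustrative, not the PR's real code.
RETRY_AFTER_STATUS_CODES = frozenset([413, 429, 503])  # urllib3's set

def is_retryable_status(status, status_forcelist=None,
                        respect_retry_after_header=False):
    if status_forcelist and status in status_forcelist:
        return True
    # No forcelist match: only Retry-After-eligible codes are retried,
    # and only when the flag is set.
    return respect_retry_after_header and status in RETRY_AFTER_STATUS_CODES
```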
firebase_admin/_retry.py (Outdated)
    return False

# Placeholder for exception retrying
def is_retryable_error(self, error: Exception):
It is best not to leave placeholders in PRs if we can. If this is currently a no-op, we can remove it and add it back when we need it.
Sounds good, removed!
firebase_admin/_retry.py (Outdated)
self.history.append((request, response, error))

# TODO: Remove comments
You can remove these, or add a TODO comment on what it doesn't cover for the future. I prefer we use a bug/issue to track these instead of TODO comments when possible.
Done. These TODOs were mainly markers so as not to miss removing some thought-process comments before the final merge.
I'm moving the missing-feature notes to bugs.
firebase_admin/_retry.py (Outdated)
def __init__(
    self,
    status: int = 10,
Would it make sense to call this max_retries instead?
In the current state, yes, because status retries are the only retries we have. This was more of a future-proofing decision for when we add error retries, where we would have total, status, and error counters.
Similar to the placeholder comment, I think we can use max_retries (and retries_left internally, since we decrease this value) and add the other options later.
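A minimal sketch of that naming, assuming a hypothetical slimmed-down HttpxRetry (not the PR's actual class):

```python
# Sketch: expose max_retries in the constructor, track retries_left
# internally since the value is decremented on each attempt.
class HttpxRetry:
    def __init__(self, max_retries: int = 10):
        self.retries_left = max_retries

    def is_exhausted(self) -> bool:
        # Exhausted once more attempts than allowed have been consumed.
        return self.retries_left < 0

    def increment(self) -> None:
        # Consume one retry per failed attempt.
        self.retries_left -= 1
```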
response, error = None, None

try:
    logger.debug('Sending request in _dispatch_with_retry(): %r', request)
Do we plan to keep these logs in the production code?
I would want to keep these for future debugging. This would be helpful to catch issues as we iterate. Wdyt?
firebase_admin/_retry.py (Outdated)
if error and not retry.is_retryable_error(error):
    raise error

retry.increment(request, response)
Should we pass the error here? retry.increment(request, response, error)
Yes, good catch! Fixed
firebase_admin/_retry.py (Outdated)
    return response
if error:
    raise error
raise Exception('_dispatch_with_retry() ended with no response or exception')
Let's narrow this down to a more specific exception type; RuntimeError or AssertionError might work better here.
Let's go with AssertionError, since this case should never be reached if the logic above is correct.
logger = logging.getLogger(__name__)
Is there a way to set the log level in production code?
Developers should be able to set the logging level in their production apps by using the following code:
import logging
logging.basicConfig()
firebase_admin_logger = logging.getLogger('firebase_admin')
firebase_admin_logger.setLevel(logging.DEBUG)
# Simulate attempt 6 completed
retry.increment(self._TEST_REQUEST, response)
# History len 6, attempt 7 -> 0.5*(2^4) = 10.0
assert retry.get_backoff_time() == pytest.approx(10.0)
Should this be 0.5 * (2^5) = 16.0, clamped at 10 due to backoff_max? If that's the case, let's clarify it here :)
Yeah that should be the correct calculation, fixed!
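For reference, the clamped exponential backoff under discussion works out as follows; a standalone sketch assuming urllib3-style semantics (backoff_factor * 2^(attempts - 1), capped at backoff_max) with an illustrative function name:

```python
def get_backoff_time(history_len, backoff_factor=0.5, backoff_max=10.0):
    # No delay before the first attempt.
    if history_len <= 0:
        return 0.0
    # Exponential growth in the number of recorded attempts,
    # clamped at backoff_max.
    return min(backoff_max, backoff_factor * (2 ** (history_len - 1)))
```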
firebase_admin/_retry.py (Outdated)
self.status_forcelist = status_forcelist
self.backoff_factor = backoff_factor
self.backoff_max = backoff_max
self.raise_on_status = raise_on_status
If raise_on_status is unused, we should just remove it.
Done!
Adds more retry logic support
This PR covers:
- Moving HttpxRetryTransport to _retry.py
- Adding HttpxRetry to manage retry state and HttpxRetryTransport creation
- Support for Retry-After headers from request responses
- Tests for HttpxRetry and HttpxRetryTransport