Python 3.9 upgrade #24980
Conversation
pre-commit.ci autofix
    "/{name}{{{bbox} sc\n".format(
        name=font.get_glyph_name(glyph_id),
        bbox=" ".join(map(str, [g.horiAdvance, 0, *g.bbox])),
    )
I think this one could become an f-string as well (although it will be horrible...) if the tool is executed again?
In general, I'm tempted to skip the f-string conversion and let it evolve "organically", as some of the conversions are quite painful, as @ksunden points out (others are obviously a clear improvement). Also, some that are turned into `format` should probably be f-strings (others not). (Interesting that some unpacking comprehensions are converted into tuples, but not things like ...)

Edit: to be clear, I do not object to the things that are not f-string conversions.
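For illustration, here is a standalone sketch (hypothetical values, not matplotlib code) of why some of these conversions are painful: the escaped literal braces (`{{` producing `{`) that read tolerably in `.format` become hard to scan when stacked against substitution braces in an f-string.

```python
# Hypothetical glyph name and bounding box, just for demonstration.
name = "uni0041"
bbox = "500 0 10 20 490 700"

# The .format spelling: {{ is a literal "{", {bbox} is a substitution.
via_format = "/{name}{{{bbox} sc\n".format(name=name, bbox=bbox)

# The f-string spelling is equivalent but the brace pile-up is harder to read.
via_fstring = f"/{name}{{{bbox} sc\n"

assert via_format == via_fstring == "/uni0041{500 0 10 20 490 700 sc\n"
```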
@@ -231,7 +231,7 @@ def _check_versions():

    # The decorator ensures this always returns the same handler (and it is only
    # attached once).
    @functools.lru_cache()
Some of these should be converted to `functools.cache` for clarity, in fact.
I updated some of them where I thought an unbounded cache was alright, but others I figured we can handle later if we want to change to unbounded rather than the default size of 128. When I was looking through the `lru_cache` locations, it looked like we may even have some decorated instance methods with `self` that may be keeping instances alive; I wonder if that is causing any of the slowly growing memory leaks that have been reported...
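A minimal sketch of the pattern being worried about above (the `Widget` class and method are hypothetical, not matplotlib code): when `lru_cache` decorates an instance method, `self` becomes part of each cache key, so the cache holds a strong reference that keeps the instance alive.

```python
import functools
import gc
import weakref

class Widget:  # hypothetical class for illustration
    @functools.lru_cache()
    def expensive(self, x):
        return x * 2

w = Widget()
ref = weakref.ref(w)
w.expensive(3)   # cache key is (self, 3): a strong reference to `w`
del w
gc.collect()
# The instance is still reachable through the method's cache.
assert ref() is not None
```

Common workarounds are caching on a module-level function that takes only hashable value arguments, or using a per-instance cache stored on `self`.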
I was specifically referring to cases where the cache just serves to establish a singleton, like here.
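A minimal sketch of that singleton pattern (hypothetical function name): on Python 3.9+, `functools.cache` behaves like `lru_cache(maxsize=None)`, and reads more clearly when the cache exists only to make the function return one shared object.

```python
import functools

@functools.cache  # equivalent to @functools.lru_cache(maxsize=None)
def get_handler():
    # The body runs once; every later call returns the cached object.
    return object()

assert get_handler() is get_handler()  # always the same instance
```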
Force-pushed from 42e6fd5 to 3f8e632.
I went through every ...
Force-pushed from 89c996d to 5c9d97e.
I think it would be a good idea to add a ...
Yes, I agree, and we already have one that I updated here :)
This is good; I can drop my f-string stash now.
Force-pushed from 5c9d97e to 3e720e4.
Thanks, @QuLogic, good recommendations as usual!
Command: `pyupgrade --py39-plus`

Manual updates to add some f-strings and ignore f-strings in tex formats.
Force-pushed from 3e720e4 to 63a8ad4.
PR Summary
Now that we are on Python 3.9+ (#24919), we can update/simplify some of the code.
Automated command-line utility used for it: `pyupgrade --py39-plus`

A lot of this is f-string updates, so I added the first commit to the git blame ignore list so there is less churn in blame lookups.
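The "git ignore blame" mechanism referred to here is git's `blame.ignoreRevsFile` / `--ignore-revs-file` support; a sketch of the typical setup (the file name matches the common convention, and the commit hash is a placeholder):

```shell
# Record the bulk-reformat commit so blame skips over it.
echo "<sha-of-the-pyupgrade-commit>" >> .git-blame-ignore-revs

# Make git blame consult the file by default in this clone:
git config blame.ignoreRevsFile .git-blame-ignore-revs

# Or pass it explicitly for a one-off lookup:
git blame --ignore-revs-file .git-blame-ignore-revs some_file.py
```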
There are also some `OSError` aliases, and removal of the unnecessary parentheses in the `lru_cache` decorator.

PR Checklist
Documentation and Tests

- Has unit tests (`pytest` passes)

Release Notes

- New features are marked with a `.. versionadded::` directive in the docstring and documented in `doc/users/next_whats_new/`
- API changes are marked with a `.. versionchanged::` directive in the docstring and documented in `doc/api/next_api_changes/`
- Release notes conform with the instructions in `next_whats_new/README.rst` or `next_api_changes/README.rst`