add comprehensions benchmark #265
Conversation
The 3.12 failure looks like a greenlet build issue and is not related to this PR. The pre-3.10 failures are expected, I guess; I thought putting
@mdboom is there a particular reviewer you are waiting on before merging this? I requested review from you and @markshannon; happy to have Mark look at it if he has time, but if not, do you feel comfortable going ahead and merging it?
@mdboom is a triager, not a core developer, so he cannot merge. @kumaraditya303 could you have a look please?
Ah, makes sense, thanks. For some reason I thought pyperformance had a different maintenance team from core CPython and that @mdboom was on it :)
LGTM, but before merging I would like to see a comparison of one of your comprehension-inlining PRs against main on this benchmark.
LGTM.
That seems backwards. Whether an optimization works on a particular benchmark determines the value of the optimization, not the benchmark. The question is "does this benchmark represent real-world code?", or the weaker form "does this benchmark make the benchmark suite more closely represent real-world code?" Answering those questions objectively is difficult without a huge amount of real-world data, so we need to apply judgement. Given that list comprehensions are clearly underrepresented in the rest of the benchmarks, I think that this does improve the benchmark suite overall. @kumaraditya303 do you have reason to think otherwise?
I don't disagree; I was just curious how much speedup inlining would make on this benchmark. Anyway, I'll merge this now.
@kumaraditya303 Thanks for merging! Preliminary numbers last week showed an 11-12% improvement in this benchmark from full inlining (python/cpython#101441). I haven't measured it yet against python/cpython#101310. Now that it's merged, I'll do a careful measurement against both of those PRs.
This benchmark is loosely based on real code from an internal workload that makes heavy use of list comprehensions, obviously simplified and anonymized.
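To illustrate the kind of workload being discussed, here is a minimal sketch of a comprehension-heavy function in the style of a pyperformance benchmark. This is not the actual benchmark from the PR; the function and data shapes are hypothetical, chosen only to show the nested list-comprehension pattern that comprehension inlining targets.

```python
# Hypothetical sketch -- not the benchmark merged in this PR.
# Nested list comprehensions over dict-shaped records, the pattern
# that comprehension inlining aims to speed up.

def transform_records(records):
    # First comprehension: reshape each record, with an inner
    # comprehension normalizing the tag list.
    rows = [
        {"id": r["id"], "tags": [t.upper() for t in r["tags"]]}
        for r in records
    ]
    # Second comprehension: filter down to ids of tagged rows.
    return [row["id"] for row in rows if row["tags"]]
```

Under pyperformance, a function like this would typically be driven in a loop by `pyperf.Runner().bench_func(...)` so that the comprehension overhead dominates the measurement.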