
add comprehensions benchmark #265

Merged
merged 2 commits from the comps branch into python:main on Feb 20, 2023

Conversation

@carljm (Member) commented Feb 15, 2023

This benchmark is loosely based on real code from an internal workload, taken from the part that makes heaviest use of list comprehensions, and obviously simplified and anonymized.
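Roughly, the shape of the code is something like the following hypothetical sketch (names and structure invented for illustration; this is not the actual benchmark source):

```python
# Hypothetical sketch of the kind of comprehension-heavy data shaping the
# benchmark exercises: filtering and transforming lists of records with
# nested list comprehensions. Not the benchmark's actual code.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Record:
    id: int
    tags: List[str]
    values: List[float]


def summarize(records: List[Record]) -> List[Dict[str, object]]:
    # Keep only records that carry at least one tag.
    tagged = [r for r in records if r.tags]
    # Build one summary dict per record, using nested comprehensions
    # for the per-record aggregations.
    return [
        {
            "id": r.id,
            "n_tags": len(r.tags),
            "total": sum(v for v in r.values if v > 0),
            "upper_tags": [t.upper() for t in r.tags],
        }
        for r in tagged
    ]


if __name__ == "__main__":
    data = [
        Record(i, ["a", "b"][: i % 3], [float(j) for j in range(i % 5)])
        for i in range(100)
    ]
    print(len(summarize(data)))
```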

@carljm (Member, Author) commented Feb 15, 2023

The 3.12 failure looks like a greenlet build issue, not something related to this PR.

The pre-3.10 failures are expected, I guess; I thought putting requires-python = ">=3.10" in the pyproject.toml for the new benchmark would cause it to be skipped on earlier Python versions, but apparently not. I guess I will just remove the 3.10-only type annotations from the benchmark (they aren't really relevant or needed anyway) so it can work on 3.7+.
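For reference, pyperformance benchmarks are typically driven by pyperf, so only the annotation style needs to change for 3.7 support (e.g. typing.List rather than newer syntax like `X | Y` unions, which need 3.10). A rough, illustrative sketch of the wiring (function names invented, not the benchmark's actual run file):

```python
# Illustrative wiring only, not the benchmark's actual code. The point is
# that typing.List annotations work on Python 3.7+, whereas e.g. "X | Y"
# union annotations require 3.10.
from typing import List

import pyperf


def bench_comprehensions(data: List[int]) -> List[int]:
    # Stand-in workload: chain a couple of list comprehensions.
    squares = [x * x for x in data]
    return [x for x in squares if x % 2 == 0]


if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.metadata["description"] = "Benchmark of list comprehensions"
    runner.bench_func("comprehensions", bench_comprehensions, list(range(1000)))
```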

@carljm requested review from @mdboom and @markshannon on February 15, 2023 at 16:54
@carljm (Member, Author) commented Feb 17, 2023

@mdboom is there a particular reviewer you are waiting on before merging this? I requested review from you and @markshannon; happy to have Mark look at it if he has time, but if not, do you feel comfortable going ahead and merging it?

@arhadthedev (Member) commented:

@mdboom is a triager, not a core developer, so he cannot merge.

@kumaraditya303 could you have a look please?

@carljm (Member, Author) commented Feb 18, 2023

Ah, makes sense, thanks. For some reason I thought pyperformance had a different maintenance team from core CPython and that @mdboom was on it :)

@kumaraditya303 (Contributor) commented:

LGTM, but before merging I would like to see a comparison with one of your PRs which implements comprehension inlining against main.

@markshannon (Member) commented:

LGTM.

> before merging I would like to see a comparison with one of your PRs which implements comprehension inlining against main.

That seems backwards. Whether an optimization works on a particular benchmark tells us about the value of the optimization, not about the value of the benchmark.

The question is "does this benchmark represent real-world code?", or the weaker form "does this benchmark make the benchmark suite more closely represent real-world code?"

Answering those questions objectively is difficult without a huge amount of real-world data, so we need to apply judgement.

Given that list comprehensions are clearly underrepresented in the rest of the benchmarks, I think that this does improve the benchmark suite overall.

@kumaraditya303 do you have reason to think otherwise?

@kumaraditya303 (Contributor) commented Feb 20, 2023

> do you have reason to think otherwise?

I don't disagree; I was just curious how much speedup inlining could produce on this benchmark. Anyway, I'll merge this now.

@kumaraditya303 merged commit 433550e into python:main on Feb 20, 2023
@carljm (Member, Author) commented Feb 20, 2023

@kumaraditya303 Thanks for merging! Preliminary numbers last week showed an 11-12% improvement in this benchmark from full inlining (python/cpython#101441). I haven't measured it yet against python/cpython#101310. Now that it's merged, I'll do a careful measurement against both of those PRs.

@carljm deleted the comps branch on February 20, 2023 at 21:10