
[benchmark] Add HF LLM benchmarks #156967


Open · wants to merge 1 commit into base: main

Conversation

@angelayi (Contributor) commented Jun 26, 2025

@angelayi angelayi requested review from zou3519 and anijain2305 June 26, 2025 17:02
pytorch-bot bot commented Jun 26, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/156967

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit 441527d with merge base 80cca83:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@BoyuanFeng (Contributor) commented:

Thanks for adding more models! A few minor comments. Also, please fix the CI.

@BoyuanFeng (Contributor) commented:

Curious, will we add these models to the existing huggingface column or to a new column called "huggingface_llm"? It might be a bit confusing to have two columns starting with "huggingface".

[screenshot of the benchmark dashboard columns]

@angelayi angelayi force-pushed the angelayi/benchmark2 branch 4 times, most recently from 0749c30 to bbf4a09 Compare August 11, 2025 15:37
@angelayi (Contributor, Author) commented:

@BoyuanFeng yes! I have updated the PR to merge everything into the huggingface column.

@angelayi angelayi marked this pull request as ready for review August 11, 2025 15:43
    elif args.export_nativert:
        frozen_model_iter_fn = export_nativert(model, example_inputs)

    use_generate_mode = kwargs.get("use_generate_mode", False)
    if use_generate_mode:
@angelayi (inline review comment on the diff above):

I added this use_generate_mode flag so that we only apply torch.compile/export to model.forward, instead of applying it to model.generate.
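The reason this distinction matters is that generate() typically contains a data-dependent Python decoding loop, while forward() is the single step worth compiling. Below is a minimal, torch-free sketch of the pattern (ToyModel and fake_compile are hypothetical stand-ins for an HF model and torch.compile, not the benchmark harness code): the "compiled" forward is swapped in, and generate's eager loop then hits it once per decode step.

```python
class ToyModel:
    """Stand-in for an HF model: generate() loops over forward()."""

    def forward(self, token):
        # One decode step: pretend the next token is token + 1.
        return token + 1

    def generate(self, start, max_new_tokens):
        # Data-dependent Python loop -- deliberately left in eager mode.
        tokens = [start]
        for _ in range(max_new_tokens):
            tokens.append(self.forward(tokens[-1]))
        return tokens


def fake_compile(fn, calls):
    """Stand-in for torch.compile: wraps fn and records each invocation."""
    def wrapped(*args, **kwargs):
        calls.append(args)
        return fn(*args, **kwargs)
    return wrapped


model = ToyModel()
calls = []
# Compile only forward; generate's control flow stays plain Python.
model.forward = fake_compile(model.forward, calls)

out = model.generate(0, max_new_tokens=3)
print(out)         # [0, 1, 2, 3]
print(len(calls))  # 3 -- the wrapped forward ran once per new token
```

In the real benchmark, the flag presumably selects this wrapping strategy so that graph capture never has to trace generate's sampling loop and stopping criteria.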

@angelayi angelayi force-pushed the angelayi/benchmark2 branch from bbf4a09 to f48faf2 Compare August 11, 2025 16:06
@angelayi angelayi force-pushed the angelayi/benchmark2 branch from f48faf2 to 441527d Compare August 12, 2025 04:45

4 participants