arxiv:2504.01833

YourBench: Easy Custom Evaluation Sets for Everyone

Published on Apr 2
· Submitted by sumuks on Apr 2

Abstract

Evaluating large language models (LLMs) effectively remains a critical bottleneck, as traditional static benchmarks suffer from saturation and contamination, while human evaluations are costly and slow. This hinders timely or domain-specific assessment, crucial for real-world applications. We introduce YourBench, a novel, open-source framework that addresses these limitations by enabling dynamic, automated generation of reliable, up-to-date, and domain-tailored benchmarks cheaply and without manual annotation, directly from user-provided documents. We demonstrate its efficacy by replicating 7 diverse MMLU subsets using minimal source text, achieving this for under 15 USD in total inference costs while perfectly preserving the relative model performance rankings (Spearman Rho = 1) observed on the original benchmark. To ensure that YourBench generates data grounded in provided input instead of relying on posterior parametric knowledge in models, we also introduce Tempora-0325, a novel dataset of over 7K diverse documents, published exclusively after March 2025. Our comprehensive analysis spans 26 SoTA models from 7 major families across varying scales (3-671B parameters) to validate the quality of generated evaluations through rigorous algorithmic checks (e.g., citation grounding) and human assessments. We release the YourBench library, the Tempora-0325 dataset, 150k+ question answer pairs based on Tempora and all evaluation and inference traces to facilitate reproducible research and empower the community to generate bespoke benchmarks on demand, fostering more relevant and trustworthy LLM evaluation.
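
The headline result above, "perfectly preserving the relative model performance rankings (Spearman Rho = 1)", is a claim about rank agreement rather than absolute scores. A minimal sketch of that check, with hypothetical model names and accuracies (not figures from the paper), assuming scipy is available:

```python
# Illustrative only: how rank preservation between an original benchmark and a
# synthetic replica can be verified. All numbers below are made-up placeholders.
from scipy.stats import spearmanr

# Hypothetical per-model accuracy on an original MMLU subset vs. its replica.
original = {"model_a": 0.82, "model_b": 0.74, "model_c": 0.61, "model_d": 0.55}
replica = {"model_a": 0.79, "model_b": 0.70, "model_c": 0.58, "model_d": 0.49}

models = sorted(original)
rho, p_value = spearmanr([original[m] for m in models],
                         [replica[m] for m in models])

# rho == 1.0 means the two benchmarks order the models identically,
# even though the absolute scores differ.
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```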

Community

Paper author Paper submitter

we're launching 🤗 yourbench today, an open source tool for custom benchmarking and synthetic data generation from ANY of your documents. it's a big step towards improving how model evaluations work

most benchmarks test general capabilities, but we know that for many use cases what really matters is how well a model performs your specific task. yourbench lets you evaluate models on what matters to you.

you can try it with your own docs today: https://huggingface.co/spaces/yourbench/demo
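
For a sense of what "custom benchmarking from your own documents" involves, here is a minimal sketch of the general idea, not the yourbench API: chunk a document, ask a hosted chat model (via huggingface_hub's InferenceClient, an assumption here) for question-answer pairs grounded in each chunk, and keep the supporting quote for later citation checks. The file name, model choice, prompt wording, and chunk size are all illustrative.

```python
# Sketch of document-grounded QA generation (NOT the yourbench library API).
import json
from huggingface_hub import InferenceClient  # assumes an HF inference token is configured

client = InferenceClient("meta-llama/Llama-3.1-8B-Instruct")  # any hosted chat model works

def chunk(text: str, size: int = 2000) -> list[str]:
    """Naive fixed-size chunking; a real pipeline would split on document structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def qa_pairs_for(chunk_text: str, n: int = 3) -> list[dict]:
    """Ask the model for QA pairs answerable only from the given chunk."""
    prompt = (
        f"Write {n} question-answer pairs that can be answered solely from the text "
        "below. Quote the supporting sentence as 'citation'. Return a JSON list of "
        "objects with keys: question, answer, citation.\n\n" + chunk_text
    )
    reply = client.chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1024,
    )
    # Real code would validate/repair the JSON; models do not always comply.
    return json.loads(reply.choices[0].message.content)

document = open("my_report.txt").read()  # hypothetical user document
benchmark = [qa for c in chunk(document) for qa in qa_pairs_for(c)]
```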

Congrats on the feature release @sumuks

But data generation and judges for task-specific skill assessment have been around for at least a year, and people are still struggling to choose the best foundation model for their application.

That's because offline evaluation techniques like this are unreliable predictors of how users will respond to changes in the app. Model choice is a nuanced engineering decision that weighs factors such as latency and other real-world constraints on deploying the model.

In fact, most "great ideas" for software changes across the industry fail to meaningfully impact business metrics and user engagement when measured with online evaluation methods like A/B testing.

Tools like this may help in the ramp-up to the real evaluation, but only if you're applying the scientific method to identify those predictors and closing the loop in your AI development and evaluation.

Wrote more on trustworthy AI experiments here: https://www.remyx.ai/blog/trustworthy-ai-experiments
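
For context, the "online evaluation methods like A/B testing" mentioned above typically compare a user-facing metric between a control arm (current model) and a treatment arm (candidate model) and test whether the difference is significant. A minimal sketch with hypothetical counts, assuming statsmodels is installed:

```python
# Illustrative A/B test readout; all counts are made-up placeholders.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical: sessions with positive feedback per arm, out of users exposed.
successes = [4_120, 4_310]    # [control, treatment]
exposures = [50_000, 50_000]  # [control, treatment]

z_stat, p_value = proportions_ztest(count=successes, nobs=exposures)
lift = successes[1] / exposures[1] - successes[0] / exposures[0]
print(f"absolute lift = {lift:.4f}, z = {z_stat:.2f}, p = {p_value:.4f}")
# A candidate model ships only if the lift is both statistically and practically
# significant; an offline benchmark win alone does not guarantee either.
```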
