Dataset Overview
LiveResearchBench provides expert-curated, real-world tasks spanning daily life, enterprise, and academia, each requiring extensive, real-time web search, multi-source reasoning, and cross-domain synthesis. DeepEval offers human-aligned protocols for reliable, systematic evaluation of agentic systems on open-ended deep research tasks.
Dataset Fields
Subsets (see the loading sketch below):
- `question_with_checklist`: Full dataset with questions and per-question checklists
- `question_only`: Questions without checklists
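If you prefer the Hugging Face `datasets` library over the bundled loader, a subset can be selected by its config name. A minimal sketch, assuming the dataset is hosted on the Hub; the repo ID below is a placeholder, not the actual identifier:

```python
from datasets import load_dataset

# "ORG/LiveResearchBench" is a placeholder repo ID -- replace with the real one.
ds = load_dataset("ORG/LiveResearchBench", "question_with_checklist")
print(ds)
```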
For each entry in the dataset:
```python
{
    'qid': 'market6VWmPyxptfK47civ',  # Unique query identifier
    'question': 'What is the size, growth rate...',  # Research question
    'checklists': [  # List of checklist items for coverage evaluation
        'Does the report provide data for the U.S. electric vehicle market...',
        'Does the report discuss the size, growth rate...',
        # ... more items
    ]
}
```
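As a quick illustration, here is a minimal sanity check for a single entry, assuming only the three fields shown above (not an official validator):

```python
# Minimal schema check based on the example entry above.
def validate_entry(entry: dict) -> None:
    assert isinstance(entry["qid"], str)
    assert isinstance(entry["question"], str)
    assert isinstance(entry["checklists"], list)
    assert all(isinstance(item, str) for item in entry["checklists"])
```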
Loading the Dataset
Default: Static Mode (No Placeholders)
The default static mode loads questions and checklists with dates already filled in (e.g., 2025 instead of `{{current_year}}`):
```python
from liveresearchbench.common.io_utils import load_liveresearchbench_dataset

# Load static version
benchmark_data = load_liveresearchbench_dataset(use_realtime=False)
```
Example:
- Question: "What is the size, growth rate, and segmentation of the U.S. electric vehicle market in 2025?"
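As a quick sanity check, assuming entries are dicts with the fields shown in Dataset Fields, you can confirm that static mode leaves no unresolved placeholders:

```python
# Uses benchmark_data from the snippet above; assumes dict entries per Dataset Fields.
unresolved = [e["qid"] for e in benchmark_data if "{{" in e["question"]]
print(f"{len(unresolved)} questions with unresolved placeholders")
```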
Realtime Mode
For dynamic evaluation with current dates, use realtime mode:
```python
# Load realtime version (replaces {{current_year}} etc.)
benchmark_data = load_liveresearchbench_dataset(use_realtime=True)
```
The following placeholders are replaced based on the current date:
- `{{current_year}}` → 2025 (current year)
- `{{last_year}}` → 2024 (current year - 1)
- `{{current_date}}` → October 29, 2025 (formatted date)
Example:
- Question: "What is the size, growth rate, and segmentation of the U.S. electric vehicle market in 2025?" (automatically updated each year)
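For intuition, the substitution can be approximated with plain string replacement keyed on today's date. This standalone sketch mirrors the mapping above; it is illustrative, not the library's actual implementation:

```python
from datetime import date

def fill_placeholders(text: str, today: date | None = None) -> str:
    # Illustrative stand-in for realtime placeholder substitution.
    today = today or date.today()
    return (
        text.replace("{{current_year}}", str(today.year))
            .replace("{{last_year}}", str(today.year - 1))
            .replace("{{current_date}}", today.strftime("%B %d, %Y"))
    )

print(fill_placeholders("What is the size of the U.S. EV market in {{current_year}}?"))
```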
Accessing Questions and Checklists
```python
from liveresearchbench.common.io_utils import (
    load_liveresearchbench_dataset,
    get_question_for_qid,
    get_checklists_for_qid,
)

# Load dataset
benchmark_data = load_liveresearchbench_dataset()

# Get question for a specific query ID
qid = "market6VWmPyxptfK47civ"
question = get_question_for_qid(benchmark_data, qid)

# Get checklist items for a specific query ID
checklists = get_checklists_for_qid(benchmark_data, qid)
print(f"Found {len(checklists)} checklist items")
```
Ethical Considerations
This release is for research purposes only, in support of an academic paper. Our datasets and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend that users evaluate and address potential concerns related to accuracy, safety, and fairness before deployment. We encourage users to consider the common limitations of AI, comply with applicable laws, and follow best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
Citation
If you find this dataset helpful, please consider citing:
```bibtex
@article{sfr2025liveresearchbench,
    title={LiveResearchBench: A Live Benchmark for User-Centric Deep Research in the Wild},
    author={Jiayu Wang and Yifei Ming and Riya Dulepet and Qinglin Chen and Austin Xu and Zixuan Ke and Frederic Sala and Aws Albarghouthi and Caiming Xiong and Shafiq Joty},
    year={2025},
    url={https://arxiv.org/abs/2510.14240}
}
```