---
language:
- en
task_categories:
- video-text-to-text
tags:
- video-qa
- agentic-reasoning
- deep-research
- video-reasoning
- agentic-ai
---
# VideoDR: A Video Deep Research Benchmark on Open Web for Agentic Video Reasoning
VideoDR is the first video deep research benchmark, designed to evaluate the ability of Multimodal Large Language Models (MLLMs) to perform complex reasoning grounded in video content while leveraging the open web.

In real-world video question answering, videos often provide only localized visual cues, while the verifiable answer is scattered across the open web. VideoDR therefore requires models to jointly perform cross-frame clue extraction, iterative retrieval, and multi-hop, reasoning-based verification.
## Core Capabilities
VideoDR evaluates agents along three core pillars (a minimal end-to-end sketch follows the list):
- 🎞️ Multi-frame Visual Cues: Accurately identifying key information that spans multiple video frames.
- 🌍 Interactive Search: Interacting with a browser environment to perform multi-hop deep search.
- 🧩 Evidence Synthesis: Combining video clues and web evidence to provide a verifiable factual answer.
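The sketch below illustrates, under simplifying assumptions, how these three pillars compose into a single watch-search-synthesize loop. The helper functions (`sample_frames`, `extract_cues`, `web_search`, `answer_with_llm`) and the hop limit are hypothetical placeholders, not part of the official evaluation harness.

```python
# Hypothetical sketch of the agentic loop VideoDR exercises; all injected
# helper functions are illustrative placeholders, not the official harness.
from dataclasses import dataclass, field


@dataclass
class AgentState:
    visual_cues: list[str] = field(default_factory=list)
    web_evidence: list[str] = field(default_factory=list)


def run_video_deep_research(video_path: str, question: str,
                            sample_frames, extract_cues, web_search,
                            answer_with_llm, max_hops: int = 5) -> str:
    """Watch -> search -> synthesize loop over the three pillars."""
    state = AgentState()

    # Pillar 1: multi-frame visual cues.
    frames = sample_frames(video_path)
    state.visual_cues = extract_cues(frames, question)

    # Pillar 2: interactive, multi-hop web search seeded by the visual cues.
    query = question
    for _ in range(max_hops):
        results = web_search(query, context=state.visual_cues + state.web_evidence)
        if not results:
            break
        state.web_evidence.extend(results)
        # Refine the next query from the evidence gathered so far.
        query = f"{question} given {results[-1]}"

    # Pillar 3: synthesize video cues and web evidence into a verifiable answer.
    return answer_with_llm(question, state.visual_cues, state.web_evidence)
```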
The benchmark spans six semantic domains and includes high-quality samples obtained through rigorous human annotation and quality control.
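If the dataset is hosted on the Hugging Face Hub, it can typically be loaded with the `datasets` library. The repository id and field names below are placeholders assumed for illustration; check the Hub page for the actual path and schema.

```python
# Minimal loading sketch; "VideoDR/VideoDR" is a placeholder repository id,
# not confirmed by this card; replace it with the actual dataset path.
from datasets import load_dataset

dataset = load_dataset("VideoDR/VideoDR", split="test")

# Inspect one sample (field names such as "video", "question", and "answer"
# are assumptions about the schema).
example = dataset[0]
print(example.keys())
```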
## Failure Analysis
The official repository provides an LLM-based failure analysis tool (`llm_as_judge`) that automatically classifies failure cases into error categories by analyzing agent traces. This helps identify core bottlenecks in video agents, such as goal drift and breakdowns in long-horizon consistency; a simplified sketch of the idea follows.
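As a rough illustration of the idea (not the official `llm_as_judge` implementation), a trace-level judge can be as simple as prompting a strong model to pick one category from a fixed taxonomy. The taxonomy, prompt, and model name below are assumptions made for this sketch.

```python
# Illustrative LLM-as-judge failure classifier; the error taxonomy and prompt
# are assumptions for demonstration, not the benchmark's official categories.
from openai import OpenAI

ERROR_CATEGORIES = [
    "visual_cue_miss",    # key frame evidence never extracted
    "goal_drift",         # search wanders away from the original question
    "retrieval_failure",  # right goal, wrong or missing web evidence
    "synthesis_error",    # evidence found but combined incorrectly
]


def classify_failure(trace: str, question: str, gold_answer: str) -> str:
    """Ask an LLM to label a failed agent trace with one error category."""
    client = OpenAI()
    prompt = (
        f"Question: {question}\nGold answer: {gold_answer}\n"
        f"Agent trace:\n{trace}\n\n"
        f"Pick exactly one category from {ERROR_CATEGORIES} that best "
        "explains why the agent failed. Reply with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any judge-capable model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```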
## Citation
If you find this benchmark useful for your research, please cite:
```bibtex
@article{liu2026watching,
  title={Watching, Reasoning, and Searching: A Video Deep Research Benchmark on Open Web for Agentic Video Reasoning},
  author={Liu, Chengwen and Yu, Xiaomin and Chang, Zhuoyue and Huang, Zhe and Zhang, Shuo and Lian, Heng and Wang, Kunyi and Xu, Rui and Hu, Sen and Hou, Jianheng and others},
  journal={arXiv preprint arXiv:2601.06943},
  year={2026}
}
```