# VisualSimpleQA

## Introduction

VisualSimpleQA is a multimodal fact-seeking benchmark with two key features. First, it enables streamlined and decoupled evaluation of LVLMs in the visual and linguistic modalities. Second, it incorporates well-defined difficulty criteria to guide human annotation and to facilitate the extraction of a challenging subset, VisualSimpleQA-hard.

Experiments on 15 LVLMs show that even state-of-the-art models such as GPT-4o achieve only 60%+ correctness in multimodal fact-seeking QA on VisualSimpleQA and 30%+ on VisualSimpleQA-hard. Furthermore, decoupled evaluation of different models on this benchmark highlights substantial opportunities for improvement in both the visual and linguistic modules.

## Structure

`original_image/`
This directory contains all image files, where each filename follows the format `original_image_{ID}.png`, matching the unique ID of the corresponding sample in VisualSimpleQA.

`cropped_image/`
This directory contains all cropped rationales from the original images. Each filename follows the format `cropped_image_{ID}.png`, matching the unique ID of the corresponding sample in VisualSimpleQA.

`data.json`
This JSON file provides detailed information about each sample.

`hard_data.json`
This JSON file provides detailed information about each hard sample in the following format:

**Example:**

```json
{
    "id": 369,
    "multimodal_question": "Which institution did the creator of this cartoon duck donate her natural science-related paintings to?",
    "answer": "The Armitt Museum, Gallery, Library",
    "rationale": "Jemima Puddle-Duck",
    "text_only_question": "Which institution did the creator of Jemima Puddle-Duck donate her natural science-related paintings to?",
    "image_source": "https://www.gutenberg.org/files/14814/14814-h/images/15-tb.jpg",
    "evidence": "https://www.armitt.com/beatrix-potter-exhibition/\nhttps://en.wikipedia.org/wiki/Beatrix_Potter",
    "resolution": "400x360",
    "proportion_of_roi": 0.2232,
    "category": "academic and education",
    "text_in_image": "absence",
    "rationale_granularity": "fine-grained"
}
```
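
The paired `multimodal_question` and `text_only_question` fields are what make the decoupled evaluation possible: the multimodal question requires the model to first recognize the rationale (here, Jemima Puddle-Duck) in the image, while the text-only question states the rationale outright and tests knowledge recall alone. A minimal sketch of this protocol, assuming a hypothetical `query_lvlm` function as a stand-in for whatever LVLM API you use:

```python
# Sketch of decoupled evaluation. `query_lvlm(question, image)` is a
# hypothetical stand-in for your LVLM's API; it is not part of this repo.
def decoupled_eval(sample, query_lvlm):
    image_path = f"./original_image/original_image_{sample['id']}.png"
    # Multimodal setting: visual recognition of the rationale + knowledge recall
    mm_answer = query_lvlm(sample['multimodal_question'], image=image_path)
    # Text-only setting: the rationale is given, so only knowledge recall is tested
    txt_answer = query_lvlm(sample['text_only_question'], image=None)
    return mm_answer, txt_answer
```

Comparing correctness across the two settings separates failures of the visual module (recognition) from failures of the linguistic module (recall).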

## Usage

1. Ensure the `data.json` file is in the same directory as the script.
2. Run the script below to randomly select and display a sample.

```python
import json
import random

# Load the full benchmark (use './hard_data.json' for VisualSimpleQA-hard)
with open('./data.json', 'r', encoding='utf-8') as file:
    data = json.load(file)

# Randomly select one sample
random_sample = random.choice(data)
image_id = random_sample.get('id')
image_path = f'./original_image/original_image_{image_id}.png'
multimodal_question = random_sample.get('multimodal_question')
text_only_question = random_sample.get('text_only_question')
answer = random_sample.get('answer')

# Display the selected sample
print(f'Image: {image_path}')
print(f'Multimodal question: {multimodal_question}')
print(f'Text-only question: {text_only_question}')
print(f'Answer: {answer}')
```
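
The same pattern extends to the hard subset and to the cropped rationale regions; a short sketch under the filename conventions described in the Structure section:

```python
import json

# VisualSimpleQA-hard uses the same schema as data.json
with open('./hard_data.json', 'r', encoding='utf-8') as file:
    hard_data = json.load(file)

# Each sample's cropped rationale region lives in cropped_image/
sample = hard_data[0]
cropped_path = f"./cropped_image/cropped_image_{sample['id']}.png"
print(sample['multimodal_question'], '->', cropped_path)
```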

## Disclaimer

This dataset contains images collected from various sources. The authors do NOT claim ownership or copyright over the images. The images may be subject to third-party rights, and users are solely responsible for verifying the legal status of any content before use.

- Intended Use: The images are provided for non-commercial research purposes only.
- Redistribution Prohibition: You may NOT redistribute or modify the images without permission from the original rights holders.
- Reporting Violations: If you encounter any sample that potentially breaches copyright or licensing rules, contact us at [email protected]. Verified violations will be removed promptly.

The authors disclaim all liability for copyright infringement or misuse arising from the use of this dataset. Users assume full legal responsibility for their actions.

## License

- Text Data: Licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/).
- Images: Subject to custom terms (see the Disclaimer above).

## Citation

**BibTeX:**

```bibtex
@article{wang2025visualsimpleqa,
    title={VisualSimpleQA: A Benchmark for Decoupled Evaluation of Large Vision-Language Models in Fact-Seeking Question Answering},
    author={Yanling Wang and Yihan Zhao and Xiaodong Chen and Shasha Guo and Lixin Liu and Haoyang Li and Yong Xiao and Jing Zhang and Qi Li and Ke Xu},
    journal={arXiv preprint arXiv:},
    year={2025}
}
```