---
license: apache-2.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: image_name
    dtype: string
  splits:
  - name: small
    num_bytes: 694935
    num_examples: 50
  - name: large
    num_bytes: 4651568
    num_examples: 300
  download_size: 10233742
  dataset_size: 5346503
configs:
- config_name: default
  data_files:
  - split: large
    path: data/large-*
  - split: small
    path: data/small-*
task_categories:
- image-text-to-text
---
# RotBench
Data for [RotBench: Evaluating Multimodal Large Language Models on Identifying Image Rotation](https://arxiv.org/abs/2508.13968).
## Dataset Summary
RotBench is a benchmark for evaluating whether multimodal large language models (MLLMs) can identify image orientation. It contains 350 manually filtered images, split into two subsets:
- Large: 300 images
- Small: 50 images
All images were drawn from the Spatial-MM dataset and passed a two-stage human verification process to ensure rotations are distinguishable.
## Dataset Download
```python
from datasets import load_dataset

dataset = load_dataset("tianyin/RotBench")
data = dataset["large"]  # or dataset["small"]

for sample in data:
    image = sample["image"]            # PIL Image object
    image_name = sample["image_name"]  # original file name
```
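Since the task is to identify how an image has been rotated, it can be convenient to generate rotated variants of each image locally. The snippet below is a minimal sketch, not the evaluation protocol from the paper: the output directory name, the clockwise rotation labels, and the PNG output format are illustrative assumptions.

```python
import os
from datasets import load_dataset

# Illustrative sketch only; the output directory, clockwise labeling, and PNG
# format are assumptions, not part of the RotBench evaluation protocol.
dataset = load_dataset("tianyin/RotBench")
out_dir = "rotbench_rotated"
os.makedirs(out_dir, exist_ok=True)

for sample in dataset["small"]:
    image = sample["image"]                        # PIL Image object
    stem, _ = os.path.splitext(sample["image_name"])
    for angle in (0, 90, 180, 270):
        # PIL's rotate() is counter-clockwise, so negate the angle for a
        # clockwise rotation; expand=True keeps the full rotated frame.
        rotated = image.rotate(-angle, expand=True)
        rotated.save(os.path.join(out_dir, f"{stem}_rot{angle}.png"))
```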
## Citation
If you find our data useful in your research, please cite the following paper:
```bibtex
@misc{niu2025rotbenchevaluatingmultimodallarge,
      title={RotBench: Evaluating Multimodal Large Language Models on Identifying Image Rotation},
      author={Tianyi Niu and Jaemin Cho and Elias Stengel-Eskin and Mohit Bansal},
      year={2025},
      eprint={2508.13968},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.13968},
}
```