---
configs:
- config_name: default
  data_files:
  - split: scene1
    path: data/scene1-*
  - split: scene2
    path: data/scene2-*
  - split: scene3
    path: data/scene3-*
  - split: scene4
    path: data/scene4-*
  - split: fall
    path: data/fall-*
  - split: refraction
    path: data/refraction-*
  - split: slope
    path: data/slope-*
  - split: spring
    path: data/spring-*
dataset_info:
  features:
  - name: image
    dtype: image
  - name: render_path
    dtype: string
  - name: metavalue
    dtype: string
  splits:
  - name: scene1
    num_examples: 11736
    num_bytes: 19942310778.0
  - name: scene2
    num_examples: 11736
    num_bytes: 17009899490.0
  - name: scene3
    num_examples: 11736
    num_bytes: 22456754445.0
  - name: scene4
    num_examples: 3556
    num_bytes: 22976022064.0
  - name: fall
    num_examples: 40000
    num_bytes: 10915924301.0
  - name: refraction
    num_examples: 40000
    num_bytes: 10709791288.0
  - name: slope
    num_examples: 40000
    num_bytes: 16693093236.0
  - name: spring
    num_examples: 40000
    num_bytes: 15431950241.0
  download_size: 136135745843.0
  dataset_size: 136135745843.0
license: apache-2.0
task_categories:
- image-feature-extraction
- object-detection
- video-classification
language:
- en
tags:
- causal-representation-learning
- simulation
- robotics
- traffic
- physics
- synthetic
---

# CausalVerse Image Dataset

This dataset contains **two families of splits**:

- **Physics splits**: `Fall`, `Refraction`, `Slope`, `Spring`
- **Static image generation**: `scene1`, `scene2`, `scene3`, `scene4`

All splits share the same columns:
- `image` (binary image; `datasets.Image`)
- `render_path` (string; original image filename/path)
- `metavalue` (string; per-sample metadata; schema varies by split)
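
A quick way to peek at these columns without downloading a full split is to stream a single sample (a minimal sketch; it only assumes the `datasets` library is installed):

```python
from datasets import load_dataset

# Stream one sample instead of downloading the whole split
ds = load_dataset("CausalVerse/CausalVerse_Image", split="fall", streaming=True)
sample = next(iter(ds))

print(type(sample["image"]))   # decoded PIL image (datasets.Image feature)
print(sample["render_path"])   # original image filename/path
print(sample["metavalue"])     # split-specific metadata string
```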

**Paper:** [CausalVerse: Benchmarking Causal Representation Learning with Configurable High-Fidelity Simulations](https://huggingface.co/papers/2510.14049)
**Project page:** [https://causal-verse.github.io/](https://causal-verse.github.io/)
**Code:** [https://github.com/CausalVerse/CausalVerseBenchmark](https://github.com/CausalVerse/CausalVerseBenchmark)

## Overview

<p align="center">
  <img src="https://github.com/CausalVerse/CausalVerseBenchmark/blob/main/assets/causalverse_intro.png?raw=true" alt="CausalVerse Overview Figure" width="85%">
</p>

**CausalVerse** is a comprehensive benchmark for **Causal Representation Learning (CRL)** focused on *recovering the data-generating process*. It couples **high-fidelity, controllable simulations** with **accessible and configurable ground-truth causal mechanisms** (structure, variables, interventions, temporal dependencies), bridging the gap between **realism** and **evaluation rigor**.

The benchmark spans **24 sub-scenes** across **four domains**:
- 🖼️ Static image generation
- 🧪 Dynamic physical simulation
- 🤖 Robotic manipulation
- 🚦 Traffic scene analysis

Scenarios range from **static to temporal**, **single to multi-agent**, and **simple to complex** structures, enabling principled stress-tests of CRL assumptions. We also include reproducible baselines to help practitioners align **assumptions ↔ data ↔ methods** and deploy CRL effectively.

## Dataset at a Glance

<p align="center">
  <img src="https://github.com/CausalVerse/CausalVerseBenchmark/blob/main/assets/causalverse_overall.png?raw=true" alt="CausalVerse Overview Figure" width="45%">
  <img src="https://github.com/CausalVerse/CausalVerseBenchmark/blob/main/assets/causalverse_pie.png?raw=true" alt="CausalVerse data info Figure" width="49.4%">
</p>

- **Scale & Coverage**: ≈ **200k** high-res images, ≈ **140k** videos, **>300M** frames across **24 scenes** in **4 domains**
  - Image generation (4), Physical simulation (10; aggregated & dynamic), Robotic manipulation (5), Traffic (5)
- **Resolution & Duration**: typical **1024×1024** / **1920×1080**; clips **3–32 s**; diverse frame rates
- **Causal Variables**: **3–100+** per scene, including **categorical** (e.g., object/material types) and **continuous** (e.g., velocity, mass, positions). Temporal scenes combine **global invariants** (e.g., mass) with **time-evolving variables** (e.g., pose, momentum).

## Sizes (from repository files)

- `scene1`: 11,736 examples — ~19.94 GB
- `scene2`: 11,736 examples — ~17.01 GB
- `scene3`: 11,736 examples — ~22.46 GB
- `scene4`: 3,556 examples — ~22.98 GB
- `fall`: 40,000 examples — ~10.92 GB
- `refraction`: 40,000 examples — ~10.71 GB
- `slope`: 40,000 examples — ~16.69 GB
- `spring`: 40,000 examples — ~15.43 GB

> Notes:
> - `metavalue` is **split-specific** (e.g., `fall` uses keys like `id,h1,r,u,h2,view`, while `scene*` have attributes like `domain,age,gender,...`).
> - If you only need a portion, consider slicing (e.g., `split="fall[:1000]"`) or streaming to reduce local footprint.

## Sample Usage

### Loading with the `datasets` library

```python
from datasets import load_dataset

# Physics split
ds_fall = load_dataset("CausalVerse/CausalVerse_Image", split="fall")

# Scene split
ds_s1 = load_dataset("CausalVerse/CausalVerse_Image", split="scene1")
```
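
If you only need part of a split, split slicing (as noted above) keeps the local footprint small; the sketch below loads a small slice and converts each decoded image to a NumPy array for downstream use:

```python
import numpy as np
from datasets import load_dataset

# Load only the first 100 examples of the fall split
ds_small = load_dataset("CausalVerse/CausalVerse_Image", split="fall[:100]")

for sample in ds_small:
    arr = np.asarray(sample["image"])  # H x W x C uint8 array from the decoded PIL image
    print(sample["render_path"], arr.shape)
    break
```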

### Using the Image Dataset (PyTorch-ready)

We provide a **reference PyTorch dataset/loader** that works with exported splits.

* Core class: `dataset/dataset_multisplit.py` → `MultiSplitImageCSVDataset`
* Builder: `build_dataloader(...)`
* Minimal example: `dataset/quickstart.py`

**Conventions**

* Each split folder contains `<SPLIT>.csv` + `.png` files
* CSV must include **`render_path`** (relative to the repository root or chosen data root)
* All remaining CSV columns are treated as **metadata** and packed into a float tensor `meta`
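
If you prefer not to use the provided loader, the same conventions can be consumed directly. The sketch below is illustrative only: it assumes an exported `image/FALL/FALL.csv`, pandas installed, and numeric metadata columns (as for the physics splits' keys such as `h1`, `r`, `u`, `view`):

```python
import pandas as pd
import torch

csv_path = "image/FALL/FALL.csv"  # hypothetical exported split (see "Download & Convert" below)
df = pd.read_csv(csv_path)

# Per the conventions: `render_path` must exist; every other column is metadata
assert "render_path" in df.columns, "CSV is missing the required render_path column"
meta_cols = [c for c in df.columns if c != "render_path"]

# Pack metadata into a float tensor, mirroring the `meta` tensor of the reference loader
meta = torch.tensor(df[meta_cols].to_numpy(dtype="float32"))
print(df["render_path"].iloc[0], meta.shape)
```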

**Quick example**

```python
from dataset.dataset_multisplit import build_dataloader
# Optional torchvision transforms:
# import torchvision.transforms as T
# tfm = T.Compose([T.Resize((256, 256)), T.ToTensor()])

loader, ds = build_dataloader(
    root="/path/to/causalverse",
    split="SCENE1",
    batch_size=16,
    shuffle=True,
    num_workers=4,
    pad_images=True,  # zero-pads within a batch if resolutions differ
    # image_transform=tfm,
    # check_files=True,
)

for images, meta in loader:
    # images: FloatTensor [B, C, H, W] in [0, 1]
    # meta  : FloatTensor [B, D] with ordered metadata (including 'view' if present)
    ...
```

> **`view` column semantics**:
> • Physical splits (e.g., FALL/REFRACTION/SLOPE/SPRING): **camera viewpoint**
> • Human rendering splits (SCENE1–SCENE4): **indoor background type**

## Installation

```bash
# 1) Clone
git clone https://github.com/CausalVerse/CausalVerseBenchmark.git
cd CausalVerseBenchmark

# 2) Core environment
python3 --version  # >= 3.9 recommended
pip install -U torch datasets huggingface_hub pillow tqdm

# 3) Optional: examples / loaders / transforms
pip install torchvision scikit-learn rich
```

## Download & Convert (Image subset)

Fetch the **image** portion from Hugging Face and export it to a simple on-disk layout (PNG files + per-split CSVs).

**Quick start (recommended)**

```bash
chmod +x dataset/run_export.sh
./dataset/run_export.sh
```

This will:

* download parquet shards (skipped if already present locally),
* export images to `image/<SPLIT>/*.png`,
* write `<SPLIT>.csv` next to each split with metadata columns + a `render_path` column.
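
After the export finishes, a quick consistency check can confirm that every CSV row has a matching PNG; this sketch assumes the default `image/` output root and the FALL split:

```python
import os
import pandas as pd

split_dir = "image/FALL"  # hypothetical default output location
df = pd.read_csv(os.path.join(split_dir, "FALL.csv"))

pngs = {f for f in os.listdir(split_dir) if f.endswith(".png")}
# render_path may be stored relative to the data root, so compare basenames only
missing = [p for p in df["render_path"] if os.path.basename(p) not in pngs]
print(f"{len(df)} rows, {len(pngs)} PNGs, {len(missing)} rows without a matching file")
```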

**Output layout**

```
image/
  FALL/
    FALL.csv
    000001.png
    ...
  SCENE1/
    SCENE1.csv
    char_001.png
    ...
```

<details>
<summary><b>Custom CLI usage</b></summary>

```bash
python dataset/export_causalverse_image.py \
  --repo-id CausalVerse/CausalVerse_Image \
  --hf-home ./.hf \
  --raw-repo-dir ./CausalVerse_Image \
  --image-root ./image \
  --folder-case upper \
  --no-overwrite \
  --include-render-path-column \
  --download-allow-patterns data/*.parquet \
  --skip-download-if-local

# Export specific splits (case-insensitive)
python dataset/export_causalverse_image.py --splits FALL SCENE1
```

</details>

## Evaluation (Image Part)

We release four reproducible baselines (shared backbone & similar training loop for fair comparison):

* `CRL_SC` — Sufficient Change
* `CRL_SF` — Mechanism Sparsity
* `CRL_SP` — Multi-view
* `SUP` — Supervised upper bound

**How to run**

```bash
# From the repo root, run each baseline:
cd evaluation/image_part/CRL_SC && python main.py
cd ../CRL_SF && python main.py
cd ../CRL_SP && python main.py
cd ../SUP && python main.py

# Example: pass the data root via env or args
# DATA_ROOT=/path/to/causalverse python main.py
```

**Full comparison (MCC / R²)**

| Algorithm | Ball on the Slope<br><sub>MCC / R²</sub> | Cylinder Spring<br><sub>MCC / R²</sub> | Light Refraction<br><sub>MCC / R²</sub> | Avg<br><sub>MCC / R²</sub> |
|---|---:|---:|---:|---:|
| **Supervised** | 0.9878 / 0.9962 | 0.9970 / 0.9910 | 0.9900 / 0.9800 | **0.9916 / 0.9891** |
| **Sufficient Change** | 0.4434 / 0.9630 | 0.6092 / 0.9344 | 0.6778 / 0.8420 | 0.5768 / 0.9131 |
| **Mechanism Sparsity** | 0.2491 / 0.3242 | 0.3353 / 0.2340 | 0.1836 / 0.4067 | 0.2560 / 0.3216 |
| **Multiview** | 0.4109 / 0.9658 | 0.4523 / 0.7841 | 0.3363 / 0.7841 | 0.3998 / 0.8447 |
| **Contrastive Learning** | 0.2853 / 0.9604 | 0.6342 / 0.9920 | 0.3773 / 0.9677 | 0.4323 / 0.9734 |

> Ablations can be reproduced by editing each method's `main.py` or adding configs (e.g., split selection, loss weights, target subsets).
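
For context, MCC and R² in CRL evaluation are typically computed by correlating learned latents with ground-truth causal variables. The sketch below is a generic implementation (not necessarily identical to the scripts in `evaluation/image_part/`): MCC via Pearson correlation with optimal dimension matching, and R² via linear regression from learned latents to ground truth:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def mcc(z_true: np.ndarray, z_hat: np.ndarray) -> float:
    """Mean correlation coefficient after optimally matching latent dimensions."""
    d = z_true.shape[1]
    corr = np.corrcoef(z_true.T, z_hat.T)[:d, d:]    # cross-correlations (d_true x d_hat)
    row, col = linear_sum_assignment(-np.abs(corr))  # assignment maximizing |correlation|
    return float(np.abs(corr[row, col]).mean())

def r2(z_true: np.ndarray, z_hat: np.ndarray) -> float:
    """Average R² of predicting each ground-truth variable from all learned latents."""
    preds = LinearRegression().fit(z_hat, z_true).predict(z_hat)
    return float(r2_score(z_true, preds, multioutput="uniform_average"))

# Toy usage with synthetic latents of matching dimensionality
rng = np.random.default_rng(0)
z_true = rng.normal(size=(1000, 5))
z_hat = z_true @ rng.normal(size=(5, 5)) + 0.1 * rng.normal(size=(1000, 5))
print(f"MCC={mcc(z_true, z_hat):.3f}  R2={r2(z_true, z_hat):.3f}")
```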

## Acknowledgements

We thank the open-source community and the simulation/rendering ecosystem. We also appreciate contributors who help improve CausalVerse through issues and pull requests.

## Citation

If CausalVerse helps your research, please cite:

```bibtex
@inproceedings{causalverse2025,
  title     = {CausalVerse: Benchmarking Causal Representation Learning with Configurable High-Fidelity Simulations},
  author    = {Guangyi Chen and Yunlong Deng and Peiyuan Zhu and Yan Li and Yifan Shen and Zijian Li and Kun Zhang},
  booktitle = {NeurIPS},
  year      = {2025},
  note      = {Spotlight},
  url       = {https://huggingface.co/CausalVerse}
}
```