|
|
--- |
|
|
license: cc-by-nc-4.0 |
|
|
configs: |
|
|
- config_name: default |
|
|
data_files: "split.csv" |
|
|
--- |
|
|
|
|
|
# SpatialLM Dataset |
|
|
|
|
|
<!-- markdownlint-disable first-line-h1 --> |
|
|
<!-- markdownlint-disable html --> |
|
|
<!-- markdownlint-disable no-duplicate-header --> |
|
|
|
|
|
<div align="center"> |
|
|
<picture> |
|
|
<source srcset="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/_dK14CT3do8rBG3QrHUjN.png" media="(prefers-color-scheme: dark)"> |
|
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/bAZyeIXOMVASHR6-xVlQU.png" width="60%" alt="SpatialLM"/>
|
|
</picture> |
|
|
</div> |
|
|
<hr style="margin-top: 0; margin-bottom: 8px;"> |
|
|
<div align="center" style="margin-top: 0; padding-top: 0; line-height: 1;"> |
|
|
<a href="https://manycore-research.github.io/SpatialLM" target="_blank" style="margin: 2px;"><img alt="Project" |
|
|
src="https://img.shields.io/badge/π%20Website-SpatialLM-ffc107?color=42a5f5&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a> |
|
|
<a href="https://arxiv.org/abs/2506.07491" target="_blank" style="margin: 2px;"><img alt="arXiv" |
|
|
src="https://img.shields.io/badge/arXiv-Techreport-b31b1b?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a> |
|
|
<a href="https://github.com/manycore-research/SpatialLM" target="_blank" style="margin: 2px;"><img alt="GitHub" |
|
|
src="https://img.shields.io/badge/GitHub-SpatialLM-24292e?logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a> |
|
|
</div> |
|
|
<div align="center" style="line-height: 1;"> |
|
|
<a href="https://huggingface.co/manycore-research/SpatialLM1.1-Qwen-0.5B" target="_blank" style="margin: 2px;"><img alt="Hugging Face" |
|
|
src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-SpatialLM-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a> |
|
|
<a href="https://huggingface.co/datasets/manycore-research/SpatialLM-Dataset" target="_blank" style="margin: 2px;"><img alt="Dataset" |
|
|
src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-Dataset-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a> |
|
|
<a href="https://huggingface.co/datasets/manycore-research/SpatialLM-Testset" target="_blank" style="margin: 2px;"><img alt="Dataset" |
|
|
src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-Testset-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a> |
|
|
</div> |
|
|
|
|
|
The SpatialLM dataset is a large-scale, high-quality synthetic dataset created by professional 3D designers and used in real-world production. It contains point clouds from 12,328 diverse indoor scenes comprising 54,778 rooms, each paired with rich ground-truth 3D annotations. The dataset provides a valuable additional resource for advancing research in indoor scene understanding, 3D perception, and related applications. For more details about the dataset construction, annotations, and benchmark tasks, please refer to the [paper](https://arxiv.org/abs/2506.07491).
|
|
|
|
|
<table style="table-layout: fixed;"> |
|
|
<tr> |
|
|
  <td style="text-align: center; vertical-align: middle; width: 25%"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/YFQzBUC_sGufXqpGL6YhV.jpeg" alt="example a" width="100%" style="display: block;"></td>
  <td style="text-align: center; vertical-align: middle; width: 25%"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/jRbPzBwhtDMWUwueodYax.jpeg" alt="example c" width="100%" style="display: block;"></td>
  <td style="text-align: center; vertical-align: middle; width: 25%"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/DpNKunoD-2-1spx6cXDxa.jpeg" alt="example b" width="100%" style="display: block;"></td>
  <td style="text-align: center; vertical-align: middle; width: 25%"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/o-JgD-oY0oK0yhryWUexv.jpeg" alt="example d" width="100%" style="display: block;"></td>
|
|
</tr> |
|
|
|
|
</table> |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
The dataset is organized into the following folder structure: |
|
|
|
|
|
```bash |
|
|
SpatialLM-Dataset/ |
|
|
├── pcd/                   # Point cloud PLY files for rooms
│   └── *.ply
├── layout/                # GT room layout
│   └── *.txt
├── examples/              # 10 point cloud and layout examples
│   ├── *.ply
│   └── *.txt
├── extract.sh             # Extraction script
├── dataset_info.json      # Dataset configuration file for training
├── spatiallm_train.json   # SpatialLM conversations data for training
├── spatiallm_val.json     # SpatialLM conversations data for validation
├── spatiallm_test.json    # SpatialLM conversations data for testing
└── split.csv              # Metadata CSV file
|
|
``` |
|
|
|
|
|
## Metadata |
|
|
|
|
|
The dataset metadata is provided in the `split.csv` file with the following columns: |
|
|
|
|
|
- **id**: Unique identifier for each sampled point cloud and layout following the naming convention `{scene_id}_{room_id}_{sample}` (e.g., `scene_001523_00_2`) |
|
|
- **room_type**: The functional type of each room (e.g., bedroom, living room) |
|
|
- **scene_id**: Unique identifier for multi-room apartment scenes |
|
|
- **room_id**: Unique identifier for individual rooms within a scene |
|
|
- **sample**: Point cloud sampling configuration for each room (4 types available): |
|
|
- **0**: Most complete observations (8 panoramic views randomly sampled) |
|
|
- **1**: Most sparse observations (8 perspective views randomly sampled) |
|
|
- **2**: Less complete observations (16 perspective views randomly sampled) |
|
|
- **3**: Less sparse observations (24 perspective views randomly sampled) |
|
|
- **split**: Dataset partition assignment (`train`, `val`, `test`, `reserved`) |
|
|
|
|
|
The dataset is divided into 11,328/500/500 scenes for the train/val/test splits, corresponding to 199,286/500/500 sampled point clouds, respectively; for the val/test splits, a single sample is randomly selected from the multiple point cloud samples of the same room for simplicity.
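
For example, here is a minimal sketch (using pandas) of loading the metadata, selecting the training rows, and mapping each `id` to its point cloud and layout files; the `pcd/<id>.ply` and `layout/<id>.txt` paths are assumed from the folder structure above:

```python
import pandas as pd

# Load the metadata CSV (columns: id, room_type, scene_id, room_id, sample, split)
meta = pd.read_csv("SpatialLM-Dataset/split.csv")

# Keep only the training rows
train = meta[meta["split"] == "train"]

# Map each id to its point cloud and layout file (paths assumed from the folder structure above)
pcd_paths = "pcd/" + train["id"] + ".ply"
layout_paths = "layout/" + train["id"] + ".txt"

# An id such as "scene_001523_00_2" decomposes into {scene_id}_{room_id}_{sample}
scene_id, room_id, sample = train["id"].iloc[0].rsplit("_", 2)
print(len(train), scene_id, room_id, sample)
```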
|
|
|
|
|
## Data Extraction |
|
|
|
|
|
Point clouds and layouts are distributed as compressed zip archives. To extract them, run the following script:
|
|
|
|
|
```bash |
|
|
cd SpatialLM-Dataset |
|
|
chmod +x extract.sh |
|
|
./extract.sh |
|
|
``` |
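
If you prefer not to run the shell script, the archives can also be unpacked directly from Python. The sketch below simply extracts every zip found under the dataset root into its containing directory; this is an assumption about the intended target layout, so adjust it if `extract.sh` places files elsewhere:

```python
import pathlib
import zipfile

# Extract every zip archive under the dataset root into the directory that contains it
# (assumed target layout; adjust if extract.sh places files elsewhere)
root = pathlib.Path("SpatialLM-Dataset")
for archive in sorted(root.rglob("*.zip")):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(archive.parent)
    print(f"extracted {archive.name}")
```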
|
|
|
|
|
## Conversation Format |
|
|
|
|
|
The `spatiallm_train.json`, `spatiallm_val.json`, and `spatiallm_test.json` files follow the **SpatialLM format** with ShareGPT-style conversations:
|
|
|
|
|
```json |
|
|
{ |
|
|
"conversations": [ |
|
|
{ |
|
|
"from": "human", |
|
|
"value": "<point_cloud>Detect walls, doors, windows, boxes. The reference code is as followed: ..." |
|
|
}, |
|
|
{ |
|
|
"from": "gpt", |
|
|
"value": "<|layout_s|>wall_0=...<|layout_e|>" |
|
|
} |
|
|
], |
|
|
"point_clouds": ["pcd/ID.ply"] |
|
|
} |
|
|
``` |
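
As a quick reference, here is a minimal sketch for iterating over these records in Python, assuming each JSON file is a top-level list of entries shaped like the example above:

```python
import json

# Load the training conversations (assumed to be a list of records like the one above)
with open("SpatialLM-Dataset/spatiallm_train.json", "r") as f:
    samples = json.load(f)

record = samples[0]
prompt = record["conversations"][0]["value"]   # human turn containing the <point_cloud> token
layout = record["conversations"][1]["value"]   # gpt turn with the structured layout string
pcd_path = record["point_clouds"][0]           # relative path such as "pcd/<id>.ply"
print(pcd_path)
print(layout[:80])
```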
|
|
|
|
|
## Usage |
|
|
|
|
|
Use the [SpatialLM codebase](https://github.com/manycore-research/SpatialLM/tree/main) to read the point cloud and layout data, for example with one of the bundled examples:
|
|
|
|
|
```python |
|
|
from spatiallm import Layout |
|
|
from spatiallm.pcd import load_o3d_pcd |
|
|
|
|
|
# Load one of the bundled example point clouds
point_cloud = load_o3d_pcd("examples/scene_008456_00_3.ply")

# Load the corresponding GT layout
with open("examples/scene_008456_00_3.txt", "r") as f:
    layout_content = f.read()
layout = Layout(layout_content)
|
|
``` |
|
|
|
|
|
## Visualization |
|
|
|
|
|
Use `rerun` to visualize the point cloud and the GT structured 3D layout:
|
|
|
|
|
```bash |
|
|
python visualize.py --point_cloud examples/scene_008456_00_3.ply --layout examples/scene_008456_00_3.txt --save scene_008456_00_3.rrd |
|
|
rerun scene_008456_00_3.rrd |
|
|
``` |
|
|
|
|
|
## SpatialGen Dataset
|
|
|
|
|
For access to the photorealistic RGB/Depth/Normal/Semantic/Instance panoramic renderings and camera trajectories used to generate the SpatialLM point clouds, please refer to the [SpatialGen project](https://manycore-research.github.io/SpatialGen).
|
|
|
|
|
## Citation |
|
|
|
|
|
If you find this work useful, please consider citing: |
|
|
|
|
|
```bibtex |
|
|
@inproceedings{SpatialLM, |
|
|
title = {SpatialLM: Training Large Language Models for Structured Indoor Modeling}, |
|
|
author = {Mao, Yongsen and Zhong, Junhao and Fang, Chuan and Zheng, Jia and Tang, Rui and Zhu, Hao and Tan, Ping and Zhou, Zihan}, |
|
|
booktitle = {Advances in Neural Information Processing Systems}, |
|
|
year = {2025} |
|
|
} |
|
|
``` |
|
|
|