---
license: cc-by-nc-4.0
---
# R2SM Dataset
<img src="https://cdn-uploads.huggingface.co/production/uploads/68231c1c344454ad3d607d95/nYzy-uI2YLR93ll1Bc3Mo.jpeg" alt="R2SM Dataset Image" width="600"/>
## Dataset preparation
### 1. COCOA-cls
Download COCO images (`2014 Train images`, `2014 Val images`) from [here](https://cocodataset.org/#download).
### 2. D2SA
Download D2SA images from [here](https://www.mvtec.com/company/research/datasets/mvtec-d2s/).
### 3. MUVA
Download MUVA images from [here](https://drive.google.com/drive/folders/1T5PNhoWlXBFDwGteVi3x357adM1t2mlo).
## Dataset format
Each split (`cocoa-cls_split`, `d2sa_split`, `muva_split`) follows the gRefCOCO format and contains three files:
1. `instances.json`
* Contains all instance annotations
* **Mask format**: RLE
* **Bounding box format**: [x, y, width, height]
2. `queries_amodal.json`
* Includes amodal queries only (as mentioned in the main paper).
* Each entry links a query to the referred objects.
3. `queries_all.json`
* Includes both amodal and modal queries.
* Each entry links queries to the referred objects.
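As a minimal illustration of working with the annotation formats above, the sketch below decodes a COCO-style uncompressed RLE mask (counts alternate runs of background and foreground pixels in column-major order) and converts a `[x, y, width, height]` box to corner coordinates. This is a pure-Python sketch for clarity; in practice you would typically use `pycocotools.mask.decode`, and you should verify the exact RLE encoding against `instances.json` before relying on it.

```python
def decode_rle(rle):
    """Decode a COCO-style uncompressed RLE mask.

    `rle` is a dict like {"counts": [...], "size": [height, width]},
    where counts alternate runs of 0s (background) and 1s (foreground)
    in column-major (Fortran) order, as in the COCO annotation format.
    Returns the mask as a list of rows.
    """
    h, w = rle["size"]
    flat, val = [], 0
    for count in rle["counts"]:
        flat.extend([val] * count)
        val = 1 - val  # runs alternate between 0 and 1
    # Column-major layout: pixel (row, col) lives at flat[col * h + row]
    return [[flat[col * h + row] for col in range(w)] for row in range(h)]

def xywh_to_xyxy(box):
    """Convert a COCO [x, y, width, height] box to [x1, y1, x2, y2]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]
```

For example, `decode_rle({"counts": [1, 2, 1], "size": [2, 2]})` yields a 2x2 mask with the anti-diagonal set, and `xywh_to_xyxy([10, 20, 30, 40])` returns `[10, 20, 40, 60]`.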
## Source Attribution
The R2SM dataset is constructed using images and annotations adapted from the following publicly available datasets:
- **COCOA-cls** and **D2SA**: From *Learning to See the Invisible: End-to-End Trainable Amodal Instance Segmentation (WACV 2019)* by Follmann et al.
- **MUVA**: From *MUVA: A New Large-Scale Benchmark for Multi-view Amodal Instance Segmentation in the Shopping Scenario (ICCV 2023)* by Li et al.
- R2SM is licensed under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
All images and annotations are originally released under non-commercial academic licenses, and R2SM is released under the same usage restriction.
Please refer to the original datasets for full details.
## Citations
If you use R2SM, please also cite the original sources:
```bibtex
@inproceedings{FollmannKHKB19,
  author    = {Patrick Follmann and Rebecca K{\"{o}}nig and Philipp H{\"{a}}rtinger and Michael Klostermann and Tobias B{\"{o}}ttger},
  title     = {Learning to See the Invisible: End-to-End Trainable Amodal Instance Segmentation},
  booktitle = {{IEEE} Winter Conference on Applications of Computer Vision, {WACV} 2019, Waikoloa Village, HI, USA, January 7-11, 2019},
  year      = {2019}
}
@inproceedings{FollmannBHKU18,
  author    = {Patrick Follmann and Tobias B{\"{o}}ttger and Philipp H{\"{a}}rtinger and Rebecca K{\"{o}}nig and Markus Ulrich},
  editor    = {Vittorio Ferrari and Martial Hebert and Cristian Sminchisescu and Yair Weiss},
  title     = {MVTec {D2S:} Densely Segmented Supermarket Dataset},
  booktitle = {Computer Vision - {ECCV} 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part {X}},
  series    = {Lecture Notes in Computer Science},
  year      = {2018}
}
@inproceedings{ZhuTMD17,
  author    = {Yan Zhu and Yuandong Tian and Dimitris N. Metaxas and Piotr Doll{\'{a}}r},
  title     = {Semantic Amodal Segmentation},
  booktitle = {2017 {IEEE} Conference on Computer Vision and Pattern Recognition, {CVPR} 2017, Honolulu, HI, USA, July 21-26, 2017},
  year      = {2017}
}
@inproceedings{LiYTBZJ023,
  author    = {Zhixuan Li and Weining Ye and Juan Terven and Zachary Bennett and Ying Zheng and Tingting Jiang and Tiejun Huang},
  title     = {{MUVA:} {A} New Large-Scale Benchmark for Multi-view Amodal Instance Segmentation in the Shopping Scenario},
  booktitle = {{IEEE/CVF} International Conference on Computer Vision, {ICCV} 2023, Paris, France, October 1-6, 2023},
  year      = {2023}
}
```