Improve dataset card: Add metadata, links, overview, and sample usage
This PR enhances the dataset card for `fMRI-Objaverse` by:
- Adding `task_categories` and relevant `tags` to the metadata for better discoverability.
- Expanding the overview of the dataset based on the paper abstract, providing more context and details.
- Including explicit links to the project page and the associated GitHub repository for further resources.
- Incorporating a "Sample Usage" section from the associated GitHub README to guide users on environment setup, training, and inference.
README.md
CHANGED
---
license: apache-2.0
task_categories:
- image-to-3d
tags:
- fmri
- neuroscience
- 3d-reconstruction
---

# fMRI-Objaverse

This repository contains **fMRI-Objaverse**, a comprehensive dataset for fMRI-based 3D reconstruction, as presented in the paper [MinD-3D++: Advancing fMRI-Based 3D Reconstruction with High-Quality Textured Mesh Generation and a Comprehensive Dataset](https://huggingface.co/papers/2409.11315).

[arXiv](https://arxiv.org/abs/2409.11315)

**Project Page**: [https://jianxgao.github.io/MinD-3D](https://jianxgao.github.io/MinD-3D)
**Code**: [https://github.com/JianxGao/MinD-3D](https://github.com/JianxGao/MinD-3D)

## Overview

fMRI-Objaverse is an extended dataset for [fMRI-Shape](https://huggingface.co/datasets/Fudan-fMRI/fMRI-Shape). It is part of the larger fMRI-3D dataset, which significantly advances the task of reconstructing 3D visuals from functional Magnetic Resonance Imaging (fMRI) data. The fMRI-3D dataset includes data from 15 participants and showcases a total of 4,768 3D objects.

fMRI-Objaverse specifically includes data from 5 subjects, 4 of whom are also part of the core set in fMRI-Shape. Each subject views 3,142 3D objects across 117 categories, all accompanied by text captions. This significantly enhances the diversity of the dataset and its potential for decoding textured 3D visual information from fMRI signals and generating textured 3D meshes.
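
The files can be pulled locally from the Hugging Face Hub before training. A minimal download sketch, assuming the Hub repo id `Fudan-fMRI/fMRI-Objaverse` (the same namespace as fMRI-Shape; verify against this dataset's actual Hub page):

```shell
# Assumed repo id -- check this dataset's Hub page before downloading.
REPO_ID="Fudan-fMRI/fMRI-Objaverse"

# Uncomment to fetch the full snapshot (requires `pip install -U huggingface_hub`):
# huggingface-cli download "$REPO_ID" --repo-type dataset --local-dir ./fMRI-Objaverse

echo "Target dataset: $REPO_ID"
```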

## Sample Usage

This section provides quick instructions on setting up the environment, training models, and running inference with the associated code from the [MinD-3D GitHub repository](https://github.com/JianxGao/MinD-3D).

### Environment Setup

To set up the environment for MinD-3D:

```bash
git clone https://github.com/JianxGao/MinD-3D.git
cd MinD-3D
bash env_install.sh
```

For MinD-3D++-specific setup, please refer to the [InstantMesh](https://github.com/TencentARC/InstantMesh) repository for detailed environment setup instructions.

### Train

Example commands to train models with the MinD-3D framework:

**MinD-3D Training:**

```bash
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port=25645 \
  train_stage1.py --sub_id 0001 --ddp \
  --config ./configs/mind3d.yaml \
  --out_dir sub01_stage1 --batchsize 8
```

```bash
CUDA_VISIBLE_DEVICES=1 python -m torch.distributed.launch --nproc_per_node=1 --master_port=25645 \
  train_stage2.py --sub_id 0001 --ddp \
  --config ./configs/mind3d.yaml \
  --out_dir sub01_stage2 --batchsize 2
```

The quantized features used for training are available at: https://drive.google.com/file/d/1R8IpG1bligLAfHkLQ2COrfTIkay14AEm/view?usp=drive_link

The trained weights for subject 1 can be downloaded at: https://drive.google.com/file/d/1ni4g1iCvdpoi2xYtmydr_w3XA5PpNrvm/view?usp=sharing

**MinD-3D++ Training:**

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port=25644 \
  train_sd.py --ddp \
  --config ./configs/mind3d_pp.yaml \
  --out_dir mind3dpp_fmri_shape_subject1_rank_64 --batchsize 8
```
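
When scripting runs over additional subjects, the stage commands above can be generated from a small helper. This is a hypothetical convenience wrapper, not part of the MinD-3D codebase; it only prints the stage-1 command (a dry run), and assumes subject IDs keep the zero-padded format used above:

```shell
# Hypothetical helper: print (do not execute) the stage-1 launch command for a subject ID.
stage1_cmd() {
  echo "python -m torch.distributed.launch --nproc_per_node=1 --master_port=25645" \
       "train_stage1.py --sub_id $1 --ddp" \
       "--config ./configs/mind3d.yaml" \
       "--out_dir sub${1}_stage1 --batchsize 8"
}

stage1_cmd 0002   # inspect the command, then run it manually
```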

### Inference

Example commands to run inference with the trained models:

**MinD-3D Inference:**

```bash
# Sub01 Plane
python generate_fmri2shape.py --config ./configs/mind3d.yaml --check_point_path ./mind3d_sub01.pt \
  --uid b5d0ae4f723bce81f119374ee5d5f944 --topk 250

# Sub01 Car
python generate_fmri2shape.py --config ./configs/mind3d.yaml --check_point_path ./mind3d_sub01.pt \
  --uid aebd98c5d7e8150b709ce7955adef61b --topk 250
```

**MinD-3D++ Inference:**

```bash
# Navigate to the InstantMesh directory for this inference
cd InstantMesh

CUDA_VISIBLE_DEVICES=0 python infer_fmri_obj.py ./configs/mind3d_pp_infer.yaml \
  --unet_path model_weight \
  --save_name save_dir \
  --input_path ./dataset/fmri_shape/core_test_list.txt \
  --fmri_dir fmri_dir \
  --gt_image_dir gt_image_dir \
  --save_video --export_texmap
```
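
To reconstruct several objects in one pass, the per-uid MinD-3D command above can be wrapped in a loop. A hypothetical dry-run sketch (it only prints each command; the uids and flags mirror the examples above):

```shell
# Hypothetical batch wrapper: print one MinD-3D inference command per uid.
infer_cmd() {
  echo "python generate_fmri2shape.py --config ./configs/mind3d.yaml" \
       "--check_point_path ./mind3d_sub01.pt --uid $1 --topk 250"
}

for uid in b5d0ae4f723bce81f119374ee5d5f944 aebd98c5d7e8150b709ce7955adef61b; do
  infer_cmd "$uid"
done
```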

## Citation

If you find our dataset useful for your research and applications, please cite using this BibTeX:

```
@misc{gao2024fmri3dcomprehensivedatasetenhancing,
      title={fMRI-3D: A Comprehensive Dataset for Enhancing fMRI-based 3D Reconstruction},