---
license: creativeml-openrail-m
configs:
  - config_name: default
    data_files:
      - split: synth
        path: data/synth-*
      - split: real
        path: data/real-*
dataset_info:
  features:
    - name: id
      dtype: int32
    - name: original_image
      dtype: image
    - name: partedit
      dtype: image
    - name: subject
      dtype: string
    - name: edit
      dtype: string
    - name: part
      dtype: string
    - name: gt_mask
      dtype: image
    - name: class_name
      dtype: string
    - name: prompt_original
      dtype: string
    - name: prompt_changed
      dtype: string
    - name: p2p_prompt
      dtype: string
    - name: p2p_template
      dtype: string
    - name: instructp2p_edit1
      dtype: string
    - name: instructp2p_edit2
      dtype: string
    - name: instructp2p_edit3
      dtype: string
    - name: seed
      dtype: int32
  splits:
    - name: synth
      num_bytes: 159677179
      num_examples: 60
    - name: real
      num_bytes: 9967718
      num_examples: 13
  download_size: 169623238
  dataset_size: 169644897
task_categories:
  - text-to-image
  - image-to-image
language:
  - en
tags:
  - Part Editing
  - image
  - Editing
size_categories:
  - n<1K
arxiv: 2502.04050
pretty_name: PartEdit
---

Paper · Project Page · 🎨 SIGGRAPH 2025

# Dataset Card for PartEdit-Bench

This benchmark is part of *PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models*, accepted at SIGGRAPH 2025.

## Dataset Details

### Dataset Description

A small benchmark for part-level image editing, with a `synth` split of 60 generated examples and a `real` split of 13 real-image examples. Each example provides the original image, an edited image (`partedit`), a ground-truth part mask (`gt_mask`), and the editing prompts (including prompt-to-prompt and instruct-pix2pix variants).

- **Curated by:** the authors
- **Funded by:** KAUST
- **Shared by:** the authors
- **Language(s) (NLP):** EN
- **License:** creativeml-openrail-m
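
A minimal loading sketch using the 🤗 `datasets` library is shown below. The repository id is a placeholder (not taken from this card), so substitute the actual Hub id of this dataset; the column and split names come from the metadata above.

```python
from datasets import load_dataset

# "<namespace>/PartEdit-Bench" is a placeholder -- replace it with the
# actual Hub id of this repository.
ds = load_dataset("<namespace>/PartEdit-Bench")

print(ds)  # DatasetDict with a "synth" split (60 examples) and a "real" split (13 examples)

sample = ds["synth"][0]
sample["original_image"].show()  # unedited input image (PIL)
sample["partedit"].show()        # edited image (PIL)
sample["gt_mask"].show()         # ground-truth part mask (PIL)
print(sample["subject"], sample["part"], sample["edit"])
print(sample["prompt_original"], "->", sample["prompt_changed"])
```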


## Annotation process

Ground-truth part masks were annotated with https://www.makesense.ai/.
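
As a small illustrative sketch (not official evaluation code), the ground-truth mask can be overlaid on the original image to inspect the annotated part region; the repository id is again a placeholder.

```python
import numpy as np
from datasets import load_dataset
from PIL import Image

# Placeholder repo id -- replace with the actual Hub id of this dataset.
sample = load_dataset("<namespace>/PartEdit-Bench", split="real")[0]

image = sample["original_image"].convert("RGB")
mask = sample["gt_mask"].convert("L").resize(image.size)

# Highlight the annotated part in red at 50% opacity.
overlay = np.array(image, dtype=np.float32)
region = np.array(mask) > 127
overlay[region] = 0.5 * overlay[region] + 0.5 * np.array([255.0, 0.0, 0.0])

Image.fromarray(overlay.astype(np.uint8)).save("mask_overlay.png")
```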

## Bias, Risks, and Limitations

The generated images may contain biases inherited from the underlying diffusion models.

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset.

## Citation

**BibTeX:**

```bibtex
@inproceedings{cvejic2025partedit,
  title={PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models},
  author={Cvejic, Aleksandar and Eldesokey, Abdelrahman and Wonka, Peter},
  booktitle={Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers},
  pages={1--11},
  year={2025}
}
```

**APA:**

Cvejic, A., Eldesokey, A., & Wonka, P. (2025, August). PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models. In Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers (pp. 1-11).