---
license: cc-by-nc-4.0
tags:
- creative
- creativity
---
<p align="center">
<img src="figs/favicon.svg" width="150">
</p>
<div align="center">
<h1>Evaluating Text Creativity across Diverse Domains: A Dataset and a Large Language Model Evaluator</h1>
<a href="https://creval-creative-evaluation.github.io/"><img src="https://img.shields.io/badge/Project%20Page-666?logo=googledocs&logoColor=FFE165&style=for-the-badge" alt="homepage"></a>
<a href="https://arxiv.org/pdf/2505.19236"><img src="https://img.shields.io/badge/arXiv%20paper-666?logo=arxiv&logoColor=FFE165&style=for-the-badge" alt="arXiv"></a>
<br/>
<a href="https://huggingface.co/datasets/Aman/CreataSet"><img src="https://img.shields.io/badge/CreataSet-dataset-blue?logo=databricks&logoColor=white&style=for-the-badge" alt="CreataSet dataset"></a>
<a href="https://huggingface.co/Aman/CrEval-7b"><img src="https://img.shields.io/badge/model-7b-purple?logo=huggingface&logoColor=yellow&style=for-the-badge" alt="CrEval-7b model"></a>
<a href="https://huggingface.co/Aman/CrEval-14b"><img src="https://img.shields.io/badge/model-14b-purple?logo=huggingface&logoColor=yellow&style=for-the-badge" alt="CrEval-14b model"></a>
<a href="https://github.com/Aman-4-Real/CrEval"><img src="https://img.shields.io/badge/github-code-black?logo=github&logoColor=white&style=for-the-badge" alt="GitHub code"></a>
<br/>
<hr>
</div>
**Paper link: [https://arxiv.org/pdf/2505.19236](https://arxiv.org/pdf/2505.19236)**

**Project Page: [https://creval-creative-evaluation.github.io/](https://creval-creative-evaluation.github.io/)**
<h2> Please cite our paper if you find our work useful. </h2>
<hr>
### Dataset Information
- **[CreataSet-ext_112965.jsonl](https://huggingface.co/datasets/Aman/CreataSet/blob/main/CreataSet-ext_112965.jsonl)**:
This file contains all the data of CreataSet-Ext (train & validation). The keys of each sample are:
```
{
"source": str, # the source of the sample
"title": str, # the title of the output ("" if N/A)
"instruction": str,
"output": str, # the answer (human-level) to the instruction
"gen_resp_order": ["MiniCPM-2B-n", "MiniCPM-2B-c", "Qwen2.5-14B-n", "Qwen2.5-14B-c"],
# the id order of the following keys "gen_resp_1", ... ("-n"=Prompt_ordinary, "-c"=Prompt_creative)
"gen_resp_*": str, # the synthetic responses generated with different models and prompts
"gen_resp_minicpm_tie_cleaned": list, # the synthetic tie responses generated with MiniCPM-2B
"gen_resp_qwen_tie_cleaned": list, # the synthetic tie responses generated with Qwen2.5-14B
"domain": str
}
```
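To make the `gen_resp_order` key concrete: it tells you which model/prompt combination produced each numbered `gen_resp_*` field. The sketch below shows this mapping with a hypothetical placeholder record (the values are illustrative, not real data from the file):

```python
# Minimal, hypothetical sample shaped like one line of CreataSet-ext_112965.jsonl.
# All values here are placeholders for illustration only.
sample = {
    "source": "demo",
    "title": "",
    "instruction": "Write a witty one-liner about rain.",
    "output": "human-level answer",
    "gen_resp_order": ["MiniCPM-2B-n", "MiniCPM-2B-c", "Qwen2.5-14B-n", "Qwen2.5-14B-c"],
    "gen_resp_1": "resp from MiniCPM-2B with Prompt_ordinary",
    "gen_resp_2": "resp from MiniCPM-2B with Prompt_creative",
    "gen_resp_3": "resp from Qwen2.5-14B with Prompt_ordinary",
    "gen_resp_4": "resp from Qwen2.5-14B with Prompt_creative",
    "domain": "humor",
}

def resolve_responses(rec):
    """Map each model/prompt id in gen_resp_order to its gen_resp_* text."""
    return {
        model_id: rec[f"gen_resp_{i}"]
        for i, model_id in enumerate(rec["gen_resp_order"], start=1)
    }

resolved = resolve_responses(sample)
```

This way downstream code can look up responses by model/prompt id instead of positional index.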
- **[CreataSet-test_with_labeling_400.jsonl](https://huggingface.co/datasets/Aman/CreataSet/blob/main/CreataSet-test_with_labeling_400.jsonl)**:
This file contains 400 distinct test samples along with their creativity scores from 30 annotators.
```
{
# same keys as CreataSet-ext above, plus:
"avg_score": list, # the average rating scores of [output, gen_resp_*]
"labeling": list, # the rating scores of all the 30 annotators [output, gen_resp_*]
}
```
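Assuming `labeling` holds one score list per annotator over `[output, gen_resp_*]` (as described above), the per-response averages in `avg_score` correspond to a column-wise mean across annotators. A toy sketch with 3 annotators instead of the real 30:

```python
# Hypothetical labeling: 3 annotators (the real file has 30), each scoring
# [output, gen_resp_1, ..., gen_resp_4]. Structure assumed from the key docs.
labeling = [
    [5, 3, 4, 2, 4],
    [4, 3, 5, 2, 3],
    [5, 2, 4, 3, 4],
]

def average_scores(labeling):
    """Column-wise mean over annotators: one average score per response."""
    n = len(labeling)
    return [sum(col) / n for col in zip(*labeling)]

avg = average_scores(labeling)  # avg[0] is the mean score of the human output
```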
- **[train_300k.json](https://huggingface.co/datasets/Aman/CreataSet/blob/main/train_300k.json)**:
The paired training samples (from CreataSet-ext_112965.jsonl) used for training CrEval (more data may not bring further improvement; see Fig. 8 in our paper).
- **[test_paired_3196.jsonl](https://huggingface.co/datasets/Aman/CreataSet/blob/main/test_paired_3196.jsonl)**:
The paired test samples (from CreataSet-test_with_labeling_400.jsonl) used for meta-evaluation of CrEval.
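All of the `.jsonl` files above can be loaded with the standard library, one JSON record per line. This self-contained sketch round-trips two placeholder records through a temporary file to show the pattern:

```python
import json
import os
import tempfile

# Placeholder records standing in for lines of any CreataSet *.jsonl file.
records = [
    {"instruction": "q1", "output": "a1"},
    {"instruction": "q2", "output": "a2"},
]

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "demo.jsonl")
    # Write: one JSON object per line.
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
    # Read: parse each line independently.
    with open(path, encoding="utf-8") as f:
        loaded = [json.loads(line) for line in f]
```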
### Brief Intro
We introduce **CreataSet**, a large-scale dataset of over **1M** creative instruction-response pairs spanning **87** domains. It supports meta-evaluation of pairwise comparison models for assessing text creativity, and it can also be used to train creative generation models. For more details, please refer to our [paper](https://arxiv.org/abs/2505.19236).
<p align="center">
<img src="figs/teaser.png" width="1000"><br/>
Figure 1. An example of how we formulate the text creativity evaluation problem to enable better evaluation.
</p>
### Data Construction
<p align="center">
<img src="figs/flowchart.png" width="800"><br/>
Figure 2. The construction process of CreataSet.
</p>
<p align="center">
<img src="figs/combined_cases.png" width="800"><br/>
Figure 3. Examples of the three different types of data. The original data appear above the dashed line; our constructed components appear below.
</p>
### Data Statistics
<div align="center">
<table>
<tr>
<td align="center">
<img src="figs/domain_dist.png" width="400"/><br/>
Figure 4: Domain Distribution
</td>
<td align="center">
<img src="figs/semantic_dist.png" width="400"/><br/>
Figure 5: Semantic Distribution
</td>
</tr>
</table>
</div>
<div align="center">
<table>
<tr>
<td align="center">
<img src="figs/length_dist.png" width="600"/><br/>
Figure 6: Length Distribution
</td>
<td align="center">
<img src="figs/length_dist2.png" width="600"/><br/>
Figure 7: Length Distribution of Each Data Source
</td>
</tr>
</table>
</div>
**License**: This dataset is released under the `cc-by-nc-4.0` license. In addition, we respect and uphold the usage terms of the original data providers. If you believe any part of this dataset affects your legal rights or raises other concerns, please contact us; we will carefully review your request and respond promptly.
```
@article{cao2025evaluating,
title={Evaluating Text Creativity across Diverse Domains: A Dataset and Large Language Model Evaluator},
author={Cao, Qian and Wang, Xiting and Yuan, Yuzhuo and Liu, Yahui and Luo, Fang and Song, Ruihua},
journal={arXiv preprint arXiv:2505.19236},
year={2025}
}
``` |