Update model card: correct license, add pipeline_tag and library_name
This PR improves the model card with the following updates:
- **Corrected License**: The `license` in the metadata has been updated from `mit` to `apache-2.0` to accurately reflect the license stated in both the current model card content and the associated GitHub repository.
- **Added Pipeline Tag**: The `pipeline_tag: text-generation` has been added. This categorizes the model as a language model performing text generation tasks (generating reasoning/evaluations for MT) and improves its discoverability on the Hugging Face Hub.
- **Added Library Name**: `library_name: transformers` has been included. Evidence from `config.json` (`transformers_version`) and the GitHub README's acknowledgments (which mention `transformers` for model/data loading) confirms compatibility. This enables an automated "how to use" snippet on the model page; a sketch of such a snippet appears below.
- **Enriched Content**: The model card content has been expanded to include the paper abstract and key sections from the GitHub README, such as "Introduction", "Quick Start", "Configuration" (with detailed model templates and decoding recommendations), and "Meta-Evaluation". This provides a more comprehensive overview of the model, its usage, and its performance.
Existing relevant information, such as the paper link, GitHub repository link, and original badges, has been preserved.
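
For illustration, the kind of snippet this metadata enables is sketched below, assuming the checkpoint loads as a standard `transformers` causal LM; the prompt is a placeholder rather than the exact ThinMQM template.

```python
# Minimal sketch of the usage snippet enabled by `library_name: transformers`.
# Assumes a standard causal-LM checkpoint; the prompt is a placeholder and
# should be built from the ThinMQM templates described in the card below.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rzzhan/ThinMQM-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Evaluate the following translation ..."  # placeholder MQM-style prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=4096, do_sample=True,
                         temperature=0.6, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The full updated model card follows.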

```yaml
---
datasets:
- rzzhan/ThinMQM-12k
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
```

<div align="center">

<!-- badges and links unchanged; elided in the diff view -->

</div>

## Abstract

Recent advancements in large reasoning models (LRMs) have introduced an intermediate "thinking" process prior to generating final answers, improving their reasoning capabilities on complex downstream tasks. However, the potential of LRMs as evaluators for machine translation (MT) quality remains underexplored. We provide the first systematic analysis of LRM-as-a-judge in MT evaluation. We identify key challenges, revealing that LRMs require tailored evaluation materials, tend to "overthink" simpler instances, and have issues with scoring mechanisms that lead to overestimation. To address these, we propose to calibrate LRM thinking by training them on synthetic, human-like thinking trajectories. Our experiments on the WMT24 Metrics benchmarks demonstrate that this approach reduces thinking budgets by ~35x while concurrently improving evaluation performance across different LRM scales from 7B to 32B (e.g., R1-Distill-Qwen-7B achieves a +8.7 correlation point improvement). These findings highlight the potential of efficiently calibrated LRMs to advance fine-grained automatic MT evaluation.

## Metrics

| Metric/Model | Avg. | En-De SPA (%) | En-De $Acc^*_{eq}$ | En-Es SPA (%) | En-Es $Acc^*_{eq}$ | Ja-Zh SPA (%) | Ja-Zh $Acc^*_{eq}$ |
|---|---|---|---|---|---|---|---|
| … | … | … | … | … | … | … | … |
| R1-Distill-Qwen-7B | 61.1 | 67.3 | 42.9 | 61.0 | 68.0 | 83.8 | 43.5 |
| *+ ThinMQM* | 69.8 (+8.7) | 84.5 (+17.2) | 48.5 (+5.6) | 77.8 (+16.8) | 68.0 (+0.0) | 89.0 (+5.2) | 51.3 (+7.8) |

---

# 📖 Introduction

Evaluating machine translation (MT) quality is a complex task that extends beyond simple string matching.
Large Reasoning Models (LRMs) are capable of modeling intricate reasoning processes, yet their role in MT evaluation remains insufficiently understood.
In this work, we present a systematic investigation into the use of LRMs as evaluators for MT quality, specifically exploring their ability to replicate the Multidimensional Quality Metrics (MQM) assessment process. Our analysis across various LRMs reveals that evaluation materials must be carefully tailored, as these models tend to overanalyze simple cases and exhibit overestimation biases.
To address these challenges, we introduce a simple yet effective method for calibrating LRM reasoning by training them on synthetic, human-like MQM evaluation trajectories.
Our experiments show that this approach not only reduces the thinking budget required by LRMs but also enhances evaluation performance across different model scales.
These findings underscore the potential of efficiently calibrated LRMs to advance fine-grained, automatic MT evaluation.
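
To make the MQM scheme concrete, here is a hedged sketch of segment-level MQM scoring; the severity weights (major = 5, minor = 1) follow the common MQM convention and are an assumption here, not necessarily the exact scheme used to build the ThinMQM trajectories.

```python
# Hedged illustration of MQM-style segment scoring: each annotated error is
# weighted by severity and the weighted counts are summed into a penalty.
# Weights are the common MQM convention (assumed, not taken from the paper).
SEVERITY_WEIGHTS = {"major": 5, "minor": 1}

def mqm_segment_score(errors):
    """Return the (negative) MQM score for a list of (span, category, severity)."""
    return -sum(SEVERITY_WEIGHTS[severity] for _, _, severity in errors)

errors = [
    ("a piece of cake", "accuracy/mistranslation", "major"),
    ("colour", "style/inconsistent-terminology", "minor"),
]
print(mqm_segment_score(errors))  # -6
```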

---

# 🚀 Quick Start

### Installation

```bash
# Clone the repository
git clone https://github.com/NLP2CT/ThinMQM.git
cd ThinMQM

# Install dependencies
pip install -r requirements.txt

# Install the mt-metrics-eval package & prepare benchmark data
git clone https://github.com/google-research/mt-metrics-eval.git
cd mt-metrics-eval
pip install .
mkdir $HOME/.mt-metrics-eval
cd $HOME/.mt-metrics-eval
wget https://storage.googleapis.com/mt-metrics-eval/mt-metrics-eval-v2.tgz
tar xfz mt-metrics-eval-v2.tgz
```

### Basic Usage

#### 1. Complete workflow for running the WMT24 experiments

```bash
# Step 1: Generate responses (using the provided scripts)
# For ThinMQM models
bash scripts/run_thinmqm.sh
# For general-purpose LRMs using the GEMBA prompt
bash scripts/run_gemba.sh

# Step 2: Extract answers and run meta-evaluation
bash scripts/run_metaeval.sh
```
> Please refer to the comments in the scripts to adjust them for your environment. For hyperparameter options, see [🔧 Configuration](#-configuration).

#### 2. Custom input files

You can evaluate your own translation data with custom input files:

**Example Data:**
```bash
# Run the example script to see how it works
python example_custom_evaluation.py
```

**Example CLI Usage:**
```bash
MODEL_NAME_OR_PATH="/path/to/rzzhan/ThinMQM-32B"  # Replace with your actual model path

# Set your data paths
SOURCE_FILE="cli_example_data/source.txt"
REFERENCE_FILE="cli_example_data/reference.txt"
SYSTEM_OUTPUTS_DIR="cli_example_data/system_outputs"
OUTPUT_DIR="cli_example_data/results"

SOURCE_LANG="English"
TARGET_LANG="Chinese"
TEMPLATE="thinking"  # For ThinMQM: "thinking" (32B) or "thinking_ref" (7B/8B)

# Run ThinMQM evaluation
python main.py custom_thinmqm \
    --model_name="$MODEL_NAME_OR_PATH" \
    --source_file="$SOURCE_FILE" \
    --reference_file="$REFERENCE_FILE" \
    --system_outputs="$SYSTEM_OUTPUTS_DIR" \
    --output_dir="$OUTPUT_DIR" \
    --source_lang="$SOURCE_LANG" \
    --target_lang="$TARGET_LANG" \
    --template="$TEMPLATE" \
    --max_new_tokens=4096 \
    --temperature=0.6
```

---

# 🔧 Configuration

### 📁 Project Structure

```
├── config/                   # Configuration management
│   └── experiment_config.py
├── evaluators/               # Specific evaluator implementations
│   ├── base_evaluator.py     # Core base classes
│   ├── thinmqm_evaluator.py
│   ├── gemba_evaluator.py
│   └── meta_evaluator.py
├── utils/                    # Utility functions
│   ├── answer_extractor.py
│   ├── template_utils.py
│   ├── mqm_parser.py
│   └── process_results.py
├── scripts/                  # Shell scripts for easy execution
│   ├── run_thinmqm.sh
│   ├── run_gemba.sh
│   └── run_pipeline.sh
├── main.py                   # Main entry point
└── meta_eval_pipeline.md     # Meta-evaluation entry point
```

### Model & Data Card

| **Released Models** | **HF Model** | **Template** | **Trained Dataset** |
|---------------------|--------------|--------------|---------------------|
| rzzhan/ThinMQM-32B | https://huggingface.co/rzzhan/ThinMQM-32B | `thinking` | https://huggingface.co/datasets/rzzhan/ThinMQM-12k/ `thinmqm12k_src` |
| rzzhan/ThinMQM-8B | https://huggingface.co/rzzhan/ThinMQM-8B | `thinking_ref` | https://huggingface.co/datasets/rzzhan/ThinMQM-12k/ `thinmqm12k_ref` |
| rzzhan/ThinMQM-7B | https://huggingface.co/rzzhan/ThinMQM-7B | `thinking_ref` | https://huggingface.co/datasets/rzzhan/ThinMQM-12k/ `thinmqm12k_ref` |

> Recommended decoding settings: `temperature=0.6, top_p=0.95` (a vLLM sketch using these settings follows the template lists below).

#### ThinMQM Model Templates
- **thinking**: Source + translation evaluation
- **thinking_ref**: Source + reference + translation evaluation

#### GEMBA Templates
- **src**: Source + translation evaluation
- **ref**: Reference + translation evaluation
- **joint**: Source + reference + translation evaluation
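
As a usage illustration, here is a minimal vLLM sketch (vLLM handles inference in this repo, per the acknowledgments) applying the recommended decoding settings; the prompt string is a placeholder, since the exact template text is assembled by the repository's template utilities.

```python
# Hedged sketch: scoring one translation with vLLM using the recommended
# decoding settings. The prompt is a placeholder and should be built from
# the `thinking` / `thinking_ref` templates listed above.
from vllm import LLM, SamplingParams

llm = LLM(model="rzzhan/ThinMQM-32B")
sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=4096)

prompt = "..."  # source + candidate translation, per the `thinking` template
result = llm.generate([prompt], sampling)[0]
print(result.outputs[0].text)  # thinking trace followed by the MQM evaluation
```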

---

# 📊 Meta-Evaluation

ThinMQM reduces thinking budgets while improving the evaluation performance of LRMs at different model scales.

<div align="center">
<img src="figures/meta-evaluation.png" alt="meta-eval" style="width: 96%; height: auto;">
</div>

---

# ✨ Acknowledgments

We thank the open-source community for the excellent tools and libraries that made this work possible, including:
- [vLLM](https://github.com/vllm-project/vllm) for efficient LLM inference
- [transformers](https://github.com/huggingface/transformers) for model/data loading and hosting
- [mt-metrics-eval](https://github.com/google-research/mt-metrics-eval) for the meta-evaluation library and data

---

# 📬 Contact

For questions, feedback, or collaboration opportunities, feel free to reach out:
- Runzhe Zhan: nlp2ct.runzhe@gmail.com

---

# 📄 License

This project is licensed under the Apache 2.0 License; see the LICENSE file for details.

---

# 📝 Citation

If you find our model, data, or evaluation code useful, please kindly cite our paper:

```bibtex
@article{zhan2025thinmqm,
  title   = {Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost},
  author  = {Zhan, Runzhe and Huang, Zhihong and Yang, Xinyi and Chao, Lidia S. and Yang, Min and Wong, Derek F.},
  journal = {ArXiv preprint},
  volume  = {2510.20780},
  year    = {2025},
  url     = {https://arxiv.org/abs/2510.20780},
}
```