---
language:
- ar
- es
- de
- fr
- it
- ja
- nl
- pl
- pt
- ru
- tr
- bg
- bn
- cs
- da
- el
- fa
- fi
- hi
- hu
- id
- ko
- no
- ro
- sk
- sv
- th
- uk
- vi
- am
- az
- bo
- he
- hr
- hy
- is
- jv
- ka
- kk
- km
- ky
- lo
- mn
- mr
- ms
- my
- ne
- ps
- si
- sw
- ta
- te
- tg
- tl
- ug
- ur
- uz
- yue
metrics:
- bleu
- comet
datasets:
- NiuTrans/LMT-60-sft-data
base_model:
- NiuTrans/LMT-60-8B-Base
license: apache-2.0
pipeline_tag: translation
---

## LMT
- Paper: [Beyond English: Toward Inclusive and Scalable Multilingual Machine Translation with LLMs](https://arxiv.org/abs/2511.07003)
- GitHub: [LMT](https://github.com/NiuTrans/LMT)

**LMT-60** is a suite of **Chinese-English-centric** multilingual machine translation (MMT) models trained on a **90B-token** mixture of monolingual and bilingual data, covering **60 languages across 234 translation directions** and achieving **SOTA performance** among models with similar language coverage.
We release both the continued pre-training (CPT) and supervised fine-tuning (SFT) versions of LMT-60 in four sizes (0.6B/1.7B/4B/8B). All checkpoints are available:

| Models | Model Link |
|:------------|:------------|
| LMT-60-0.6B-Base | [NiuTrans/LMT-60-0.6B-Base](https://huggingface.co/NiuTrans/LMT-60-0.6B-Base) |
| LMT-60-0.6B | [NiuTrans/LMT-60-0.6B](https://huggingface.co/NiuTrans/LMT-60-0.6B) |
| LMT-60-1.7B-Base | [NiuTrans/LMT-60-1.7B-Base](https://huggingface.co/NiuTrans/LMT-60-1.7B-Base) |
| LMT-60-1.7B | [NiuTrans/LMT-60-1.7B](https://huggingface.co/NiuTrans/LMT-60-1.7B) |
| LMT-60-4B-Base | [NiuTrans/LMT-60-4B-Base](https://huggingface.co/NiuTrans/LMT-60-4B-Base) |
| LMT-60-4B | [NiuTrans/LMT-60-4B](https://huggingface.co/NiuTrans/LMT-60-4B) |
| LMT-60-8B-Base | [NiuTrans/LMT-60-8B-Base](https://huggingface.co/NiuTrans/LMT-60-8B-Base) |
| LMT-60-8B | [NiuTrans/LMT-60-8B](https://huggingface.co/NiuTrans/LMT-60-8B) |

Our supervised fine-tuning (SFT) data are released at [NiuTrans/LMT-60-sft-data](https://huggingface.co/datasets/NiuTrans/LMT-60-sft-data).
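
For a quick look at that data, it can be loaded with the 🤗 `datasets` library. The sketch below is only illustrative: the split name and the printed fields depend on the dataset's actual schema, so check the dataset card if it differs.

```python
# Minimal sketch for browsing the released SFT data.
# NOTE: the split name ("train") is an assumption; consult the dataset card
# of NiuTrans/LMT-60-sft-data for the actual splits and column names.
from datasets import load_dataset

sft_data = load_dataset("NiuTrans/LMT-60-sft-data", split="train")
print(sft_data)       # column names and number of rows
print(sft_data[0])    # one SFT example
```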

## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "NiuTrans/LMT-60-8B"

tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prompt format: name the source and target languages, then give the source text.
prompt = "Translate the following text from English into Chinese.\nEnglish: The concept came from China where plum blossoms were the flower of choice.\nChinese: "
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Deterministic beam search decoding.
generated_ids = model.generate(**model_inputs, max_new_tokens=512, num_beams=5, do_sample=False)
# Keep only the newly generated tokens, i.e. everything after the prompt.
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

response = tokenizer.decode(output_ids, skip_special_tokens=True)

print("response:", response)
```
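
Because the tokenizer is loaded with `padding_side='left'`, the same checkpoint can also translate several sentences in one batch. The snippet below is a minimal sketch of batched decoding under that setup; the example sentences and the English-to-German direction are only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "NiuTrans/LMT-60-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
model = AutoModelForCausalLM.from_pretrained(model_name)
if tokenizer.pad_token is None:  # be defensive in case no pad token is defined
    tokenizer.pad_token = tokenizer.eos_token

# Illustrative inputs: translate two English sentences into German.
sources = [
    "The weather is lovely today.",
    "Machine translation has improved rapidly in recent years.",
]
prompts = [
    f"Translate the following text from English into German.\nEnglish: {s}\nGerman: "
    for s in sources
]

# One chat-formatted string per sentence, padded on the left as a single batch.
texts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": p}], tokenize=False, add_generation_prompt=True
    )
    for p in prompts
]
model_inputs = tokenizer(texts, return_tensors="pt", padding=True).to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512, num_beams=5, do_sample=False)

# With left padding every row has the same prompt length, so slicing it off
# leaves only the newly generated tokens.
new_tokens = generated_ids[:, model_inputs.input_ids.shape[1]:]
translations = tokenizer.batch_decode(new_tokens, skip_special_tokens=True)
for src, tgt in zip(sources, translations):
    print(src, "->", tgt)
```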

## Supported Languages

| Resource Tier | Languages |
| :---- | :---- |
| High-resource Languages (13) | Arabic(ar), English(en), Spanish(es), German(de), French(fr), Italian(it), Japanese(ja), Dutch(nl), Polish(pl), Portuguese(pt), Russian(ru), Turkish(tr), Chinese(zh) |
| Medium-resource Languages (18) | Bulgarian(bg), Bengali(bn), Czech(cs), Danish(da), Modern Greek(el), Persian(fa), Finnish(fi), Hindi(hi), Hungarian(hu), Indonesian(id), Korean(ko), Norwegian(no), Romanian(ro), Slovak(sk), Swedish(sv), Thai(th), Ukrainian(uk), Vietnamese(vi) |
| Low-resource Languages (29) | Amharic(am), Azerbaijani(az), Tibetan(bo), Modern Hebrew(he), Croatian(hr), Armenian(hy), Icelandic(is), Javanese(jv), Georgian(ka), Kazakh(kk), Central Khmer(km), Kirghiz(ky), Lao(lo), Chinese Mongolian(mn_cn), Marathi(mr), Malay(ms), Burmese(my), Nepali(ne), Pashto(ps), Sinhala(si), Swahili(sw), Tamil(ta), Telugu(te), Tajik(tg), Tagalog(tl), Uighur(ug), Urdu(ur), Uzbek(uz), Yue Chinese(yue) |

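
The Quickstart prompt refers to the source and target languages by their English names ("...from English into Chinese."). A small helper like the hedged sketch below can build the same prompt for any pair in the table above; the assumption that every direction uses this exact template is ours, so verify it against the official repository if in doubt.

```python
# Hypothetical helper: build an LMT-60 translation prompt from language names.
# Assumption: every direction follows the same template as the English->Chinese
# Quickstart example above.
def build_prompt(src_lang: str, tgt_lang: str, text: str) -> str:
    return (
        f"Translate the following text from {src_lang} into {tgt_lang}.\n"
        f"{src_lang}: {text}\n"
        f"{tgt_lang}: "
    )

print(build_prompt("German", "Korean", "Guten Morgen!"))
```
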
## Citation

If you find our work useful for your research, please kindly cite our paper:

```bibtex
@misc{luoyf2025lmt,
      title={Beyond English: Toward Inclusive and Scalable Multilingual Machine Translation with LLMs},
      author={Yingfeng Luo and Ziqiang Xu and Yuxuan Ouyang and Murun Yang and Dingyang Lin and Kaiyan Chang and Tong Zheng and Bei Li and Peinan Feng and Quan Du and Tong Xiao and Jingbo Zhu},
      year={2025},
      eprint={2511.07003},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2511.07003},
}
```