Add ChatNow link and API usage section to README

- Updated header section with ChatNow link pointing to ZenMux deployment
- Added Try Online section with direct link to online experience
- Added API Usage section with OpenAI-compatible client example
- Enhanced Quickstart section with user-friendly access methods

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

#2
by thinkthinking - opened
Files changed (1)
  1. README.md +69 -27
README.md CHANGED
@@ -1,24 +1,21 @@
1
  ---
2
  license: mit
3
  base_model:
4
- - inclusionAI/Ling-mini-base-2.0
5
  pipeline_tag: text-generation
6
  library_name: transformers
7
  ---
8
 
9
-
10
-
11
  <p align="center">
12
  <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
13
  <p>
14
 
15
- <p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
16
-
17
 
18
  ## Introduction
19
 
20
- Today, we are excited to announce the open-sourcing of __Ling 2.0__ — a family of MoE-based large language models that combine __SOTA performance__ with __high efficiency__.
21
- The first released version, Ling-mini-2.0, is compact yet powerful. It has __16B total parameters__, but only __1.4B__ are activated per input token (non-embedding 789M). Trained on more than __20T tokens__ of high-quality data and enhanced through multi-stage supervised fine-tuning and reinforcement learning, Ling-mini-2.0 achieves remarkable improvements in complex reasoning and instruction following. With just 1.4B activated parameters, it still reaches the top-tier level of sub-10B dense LLMs and even matches or surpasses much larger MoE models.
22
 
23
  <p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/2NKZS5LVXzcAAAAASBAAAAgADkZ7AQFr/fmt.webp" /></p>
24
 
@@ -28,25 +25,24 @@ We evaluated Ling-mini-2.0 on challenging general reasoning tasks in coding (Liv
28
 
29
  ### 7× Equivalent Dense Performance Leverage
30
 
31
- Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation ratio__ MoE architecture, with empirically optimized design choices in expert granularity, shared expert ratio, attention ratio, aux-loss free + sigmoid routing strategy, MTP loss, QK-Norm, half RoPE, and more. This enables small-activation MoE models to achieve over __7× equivalent dense performance__. In other words, __Ling-mini-2.0 with only 1.4B activated parameters (non-embedding 789M) can deliver performance equivalent to a 7–8B dense model__.
32
 
33
  ### High-speed Generation at 300+ token/s
34
 
35
  <p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/bnxIRaK9tzcAAAAAgSAAAAgADkZ7AQFr/original" /></p>
36
 
37
- The highly sparse small-activation MoE architecture also delivers significant training and inference efficiency. In simple QA scenarios (within 2000 tokens), __Ling-mini-2.0 generates at 300+ token/s (on H20 deployment)__ — more than __2× faster__ than an 8B dense model. Ling-mini-2.0 can handle a __128K context length__ with YaRN; as sequence length increases, the relative speedup can reach __over 7×__.
38
 
39
  <p align="center"><img src="https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/figures/needle_in_a_haystack.webp" /></p>
40
 
41
  ### Open-sourced FP8 Efficient Training Solution
42
 
43
- Ling 2.0 employs __FP8 mixed-precision training__ throughout. Compared with BF16, experiments with over 1T training tokens show nearly identical loss curves and downstream benchmark performance. To support the community in efficient continued pretraining and fine-tuning under limited compute, we are also open-sourcing our __FP8 training solution__. Based on tile/blockwise FP8 scaling, it further introduces FP8 optimizer, FP8 on-demand transpose weight, and FP8 padding routing map for extreme memory optimization. On 8/16/32 80G GPUs, compared with LLaMA 3.1 8B and Qwen3 8B, __Ling-mini-2.0 achieved 30–60% throughput gains with MTP enabled, and 90–120% throughput gains with MTP disabled__.
44
 
45
  ### A More Open Open-Source Strategy
46
 
47
  We believe Ling-mini-2.0 is an ideal starting point for MoE research. For the first time at this scale, it integrates 1/32 sparsity, MTP layers, and FP8 training — achieving both strong effectiveness and efficient training/inference performance, making it a prime candidate for the small-size LLM segment.
48
- To further foster community research, in addition to releasing the post-trained version, we are also open-sourcing __five pretraining checkpoints__: the pre-finetuning Ling-mini-2.0-base, along with four base models trained on 5T, 10T, 15T, and 20T tokens, enabling deeper research and broader applications.
49
-
50
 
51
  ## Model Downloads
52
 
@@ -55,30 +51,65 @@ You can download the following table to see the various stage of Ling-mini-2.0 m
55
  <center>
56
 
57
  | **Model** | **Context Length** | **Download** |
58
- |:----------------------:| :----------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
59
- | Ling-mini-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0) |
60
- | Ling-mini-base-2.0-5T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-5T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-5T) |
61
- | Ling-mini-base-2.0-10T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-10T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-10T) |
62
- | Ling-mini-base-2.0-15T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-15T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-15T) |
63
- | Ling-mini-base-2.0-20T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-20T) |
64
- | Ling-mini-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-2.0) |
65
 
66
  </center>
67
 
68
  Note: If you are interested in previous versions, please visit the past model collections on [Huggingface](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
69
 
70
-
71
  ## Quickstart
72
 
73
  ### Convert to safetensors
74
 
75
  Models with safetensors format can be downloaded from [HuggingFace](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
76
  If you want to train your own model and evaluate it, you can convert the DCP checkpoint produced by training:
 
77
  ```shell
78
  python tools/convert_dcp_to_safe_tensors.py --checkpoint-path ${DCP_PATH} --target-path ${SAFETENSORS_PATH}
79
  ```
80
 
81
  Currently, BF16 and FP8 formats are supported; use the corresponding conversion flag:
 
82
  - `--force-bf16` for BF16 format.
83
  - `--force-fp8` for FP8 format.
84
 
@@ -181,7 +212,9 @@ vllm serve inclusionAI/Ling-mini-2.0 \
181
  ```
182
 
183
  To handle long context in vLLM using YaRN, we need to follow these two steps:
 
184
  1. Add a `rope_scaling` field to the model's `config.json` file, for example:
 
185
  ```json
186
  {
187
  ...,
@@ -192,24 +225,29 @@ To handle long context in vLLM using YaRN, we need to follow these two steps:
192
  }
193
  }
194
  ```
 
195
  2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.
196
 
197
  For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
198
 
199
-
200
  ### SGLang
201
 
202
  #### Environment Preparation
203
 
204
  We will submit our model to the official SGLang release later; for now, prepare the environment with the following steps:
 
205
  ```shell
206
  pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
207
  ```
 
208
  You can use the Docker image as well:
 
209
  ```shell
210
  docker pull lmsysorg/sglang:v0.5.2rc0-cu126
211
  ```
 
212
  Then apply the patch to the sglang installation:
 
213
  ```shell
214
  # patch command is needed, run `yum install -y patch` if needed
215
  patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
@@ -217,9 +255,10 @@ patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__fil
217
 
218
  #### Run Inference
219
 
220
- Both BF16 and FP8 models are now supported by SGLang; which one is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same launch command:
221
 
222
  - Start server:
 
223
  ```shell
224
  python -m sglang.launch_server \
225
  --model-path $MODEL_PATH \
@@ -227,16 +266,19 @@ python -m sglang.launch_server \
227
  --trust-remote-code \
228
  --attention-backend fa3
229
  ```
 
230
  MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN`
231
  to the start command.
232
 
233
  - Client:
 
234
  ```shell
235
  curl -s http://localhost:${PORT}/v1/chat/completions \
236
  -H "Content-Type: application/json" \
237
  -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
239
  ```
 
240
  More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).
241
 
242
  ## Training
@@ -254,11 +296,11 @@ The table below shows the pre-training performance of several models, measured i
254
  <center>
255
 
256
  | **Model** | **8 x 80G GPUs (GBS=128)** | **16 x 80G GPUs (GBS=256)** | **32 x 80G GPUs (GBS=512)** |
257
- |:-----------------------:| :--------------------: | :---------------------: | :---------------------: |
258
- | LLaMA 3.1 8B (baseline) | 81222 | 161319 | 321403 |
259
- | Qwen3 8B | 55775 (-31.33%) | 109799 (-31.94%) | 219943 (-31.57%) |
260
- | Ling-mini-2.0 | 109532 (+34.86%) | 221585 (+37.36%) | 448726 (+39.61%) |
261
- | Ling-mini-2.0 w/o MTP | 128298 (+57.96%) | 307264 (+90.47%) | 611466 (+90.25%) |
262
 
263
  </center>
264
 
 
1
  ---
2
  license: mit
3
  base_model:
4
+ - inclusionAI/Ling-mini-base-2.0
5
  pipeline_tag: text-generation
6
  library_name: transformers
7
  ---
8
 
 
 
9
  <p align="center">
10
  <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
11
  <p>
12
 
13
+ <p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp🙏 <a href="https://zenmux.ai/inclusionai/ling-mini-2.0">ChatNow</a></p>
 
14
 
15
  ## Introduction
16
 
17
+ Today, we are excited to announce the open-sourcing of **Ling 2.0** — a family of MoE-based large language models that combine **SOTA performance** with **high efficiency**.
18
+ The first released version, Ling-mini-2.0, is compact yet powerful. It has **16B total parameters**, but only **1.4B** are activated per input token (non-embedding 789M). Trained on more than **20T tokens** of high-quality data and enhanced through multi-stage supervised fine-tuning and reinforcement learning, Ling-mini-2.0 achieves remarkable improvements in complex reasoning and instruction following. With just 1.4B activated parameters, it still reaches the top-tier level of sub-10B dense LLMs and even matches or surpasses much larger MoE models.
19
 
20
  <p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/2NKZS5LVXzcAAAAASBAAAAgADkZ7AQFr/fmt.webp" /></p>
21
 
 
25
 
26
  ### 7× Equivalent Dense Performance Leverage
27
 
28
+ Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a **1/32 activation ratio** MoE architecture, with empirically optimized design choices in expert granularity, shared expert ratio, attention ratio, aux-loss free + sigmoid routing strategy, MTP loss, QK-Norm, half RoPE, and more. This enables small-activation MoE models to achieve over **7× equivalent dense performance**. In other words, **Ling-mini-2.0 with only 1.4B activated parameters (non-embedding 789M) can deliver performance equivalent to a 7–8B dense model**.
29
 
30
  ### High-speed Generation at 300+ token/s
31
 
32
  <p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/bnxIRaK9tzcAAAAAgSAAAAgADkZ7AQFr/original" /></p>
33
 
34
+ The highly sparse small-activation MoE architecture also delivers significant training and inference efficiency. In simple QA scenarios (within 2000 tokens), **Ling-mini-2.0 generates at 300+ token/s (on H20 deployment)** — more than **2× faster** than an 8B dense model. Ling-mini-2.0 can handle a **128K context length** with YaRN; as sequence length increases, the relative speedup can reach **over 7×**.
35
 
36
  <p align="center"><img src="https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/figures/needle_in_a_haystack.webp" /></p>
37
 
38
  ### Open-sourced FP8 Efficient Training Solution
39
 
40
+ Ling 2.0 employs **FP8 mixed-precision training** throughout. Compared with BF16, experiments with over 1T training tokens show nearly identical loss curves and downstream benchmark performance. To support the community in efficient continued pretraining and fine-tuning under limited compute, we are also open-sourcing our **FP8 training solution**. Based on tile/blockwise FP8 scaling, it further introduces FP8 optimizer, FP8 on-demand transpose weight, and FP8 padding routing map for extreme memory optimization. On 8/16/32 80G GPUs, compared with LLaMA 3.1 8B and Qwen3 8B, **Ling-mini-2.0 achieved 30–60% throughput gains with MTP enabled, and 90–120% throughput gains with MTP disabled**.
41
 
42
  ### A More Open Open-Source Strategy
43
 
44
  We believe Ling-mini-2.0 is an ideal starting point for MoE research. For the first time at this scale, it integrates 1/32 sparsity, MTP layers, and FP8 training — achieving both strong effectiveness and efficient training/inference performance, making it a prime candidate for the small-size LLM segment.
45
+ To further foster community research, in addition to releasing the post-trained version, we are also open-sourcing **five pretraining checkpoints**: the pre-finetuning Ling-mini-2.0-base, along with four base models trained on 5T, 10T, 15T, and 20T tokens, enabling deeper research and broader applications.
 
46
 
47
  ## Model Downloads
48
 
 
51
  <center>
52
 
53
  | **Model** | **Context Length** | **Download** |
54
+ | :--------------------: | :----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
55
+ | Ling-mini-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0) |
56
+ | Ling-mini-base-2.0-5T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-5T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-5T) |
57
+ | Ling-mini-base-2.0-10T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-10T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-10T) |
58
+ | Ling-mini-base-2.0-15T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-15T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-15T) |
59
+ | Ling-mini-base-2.0-20T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-20T) |
60
+ | Ling-mini-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-2.0) |
61
 
62
  </center>
63
 
64
  Note: If you are interested in previous versions, please visit the past model collections on [Huggingface](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
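If you prefer to fetch the weights from a script, below is a minimal sketch using `huggingface_hub`; the `repo_id` and `local_dir` values are illustrative and can be swapped for any checkpoint in the table above.

```python
from huggingface_hub import snapshot_download

# Download the post-trained Ling-mini-2.0 weights into a local directory.
# Replace repo_id with e.g. "inclusionAI/Ling-mini-base-2.0-5T" for a base checkpoint.
snapshot_download(
    repo_id="inclusionAI/Ling-mini-2.0",
    local_dir="./Ling-mini-2.0",
)
```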
65
 
 
66
  ## Quickstart
67
 
68
+ ### 🚀 Try Online
69
+
70
+ You can experience Ling-mini-2.0 online at: [ZenMux](https://zenmux.ai/inclusionai/ling-mini-2.0)
71
+
72
+ ### 🔌 API Usage
73
+
74
+ You can also use Ling-mini-2.0 through API calls:
75
+
76
+ ```python
77
+ from openai import OpenAI
78
+
79
+ # 1. Initialize the OpenAI client
80
+ client = OpenAI(
81
+     # 2. Point the base URL to the ZenMux endpoint
82
+     base_url="https://zenmux.ai/api/v1",
83
+     # 3. Replace with the API Key from your ZenMux user console
84
+     api_key="<your ZENMUX_API_KEY>",
85
+ )
86
+
87
+ # 4. Make a request
88
+ completion = client.chat.completions.create(
89
+     # 5. Specify the model to use in the format "provider/model-name"
90
+     model="inclusionAI/Ling-mini-2.0",
91
+     messages=[
92
+         {
93
+             "role": "user",
94
+             "content": "What is the meaning of life?"
95
+         }
96
+     ]
97
+ )
98
+
99
+ print(completion.choices[0].message.content)
100
+ ```
101
+
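For longer replies you can stream tokens as they are generated. Below is a minimal sketch, assuming the ZenMux endpoint supports the standard OpenAI streaming protocol:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://zenmux.ai/api/v1",
    api_key="<your ZENMUX_API_KEY>",
)

# stream=True yields chunks as they are produced instead of one final message.
stream = client.chat.completions.create(
    model="inclusionAI/Ling-mini-2.0",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```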
102
  ### Convert to safetensors
103
 
104
  Models with safetensors format can be downloaded from [HuggingFace](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
105
  If you want to train your own model and evaluate it, you can convert the DCP checkpoint produced by training:
106
+
107
  ```shell
108
  python tools/convert_dcp_to_safe_tensors.py --checkpoint-path ${DCP_PATH} --target-path ${SAFETENSORS_PATH}
109
  ```
110
 
111
  Currently, BF16 and FP8 formats are supported; use the corresponding conversion flag:
112
+
113
  - `--force-bf16` for BF16 format.
114
  - `--force-fp8` for FP8 format.
115
 
 
212
  ```
213
 
214
  To handle long context in vLLM using YaRN, we need to follow these two steps:
215
+
216
  1. Add a `rope_scaling` field to the model's `config.json` file, for example:
217
+
218
  ```json
219
  {
220
  ...,
 
225
  }
226
  }
227
  ```
228
+
229
  2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.
230
 
231
  For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
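Once the vLLM server is running, you can query it with any OpenAI-compatible client. A minimal sketch, assuming the default serving port 8000 and that the model was served as `inclusionAI/Ling-mini-2.0`:

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API at /v1 on the serving port (8000 by default).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="inclusionAI/Ling-mini-2.0",  # must match the model name passed to `vllm serve`
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)
```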
232
 
 
233
  ### SGLang
234
 
235
  #### Environment Preparation
236
 
237
  We will submit our model to the official SGLang release later; for now, prepare the environment with the following steps:
238
+
239
  ```shell
240
  pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
241
  ```
242
+
243
  You can use the Docker image as well:
244
+
245
  ```shell
246
  docker pull lmsysorg/sglang:v0.5.2rc0-cu126
247
  ```
248
+
249
  Then apply the patch to the sglang installation:
250
+
251
  ```shell
252
  # patch command is needed, run `yum install -y patch` if needed
253
  patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
 
255
 
256
  #### Run Inference
257
 
258
+ Both BF16 and FP8 models are now supported by SGLang; which one is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same launch command:
259
 
260
  - Start server:
261
+
262
  ```shell
263
  python -m sglang.launch_server \
264
  --model-path $MODEL_PATH \
 
266
  --trust-remote-code \
267
  --attention-backend fa3
268
  ```
269
+
270
  MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN`
271
  to the start command.
272
 
273
  - Client:
274
+
275
  ```shell
276
  curl -s http://localhost:${PORT}/v1/chat/completions \
277
  -H "Content-Type: application/json" \
278
  -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
280
  ```
281
+
282
  More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).
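The same request can also be sent from Python. A minimal sketch, assuming the server launched above and its OpenAI-compatible `/v1` endpoint (port 30000 by default unless `--port` is set):

```python
import os

from openai import OpenAI

# Point the client at the SGLang server started above.
port = os.environ.get("PORT", "30000")
client = OpenAI(base_url=f"http://localhost:{port}/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="auto",  # same placeholder model name as in the curl example
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```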
283
 
284
  ## Training
 
296
  <center>
297
 
298
  | **Model** | **8 x 80G GPUs (GBS=128)** | **16 x 80G GPUs (GBS=256)** | **32 x 80G GPUs (GBS=512)** |
299
+ | :---------------------: | :------------------------: | :-------------------------: | :-------------------------: |
300
+ | LLaMA 3.1 8B (baseline) | 81222 | 161319 | 321403 |
301
+ | Qwen3 8B | 55775 (-31.33%) | 109799 (-31.94%) | 219943 (-31.57%) |
302
+ | Ling-mini-2.0 | 109532 (+34.86%) | 221585 (+37.36%) | 448726 (+39.61%) |
303
+ | Ling-mini-2.0 w/o MTP | 128298 (+57.96%) | 307264 (+90.47%) | 611466 (+90.25%) |
304
 
305
  </center>
306