mackenzietechdocs committed
Commit d8bedc7 · verified · 1 Parent(s): 6126819

Add Artificial Analysis evaluations for kimi-k2-thinking

This commit adds structured evaluation results to the model card. The results are formatted using the model-index specification and will be displayed in the model card's evaluation widget.

Files changed (1):
1. README.md +332 -281
README.md CHANGED
---
license: other
license_name: modified-mit
library_name: transformers
model-index:
- name: Kimi-K2-Thinking
  results:
  - task:
      type: evaluation
    dataset:
      name: Artificial Analysis Benchmarks
      type: artificial_analysis
    metrics:
    - name: Artificial Analysis Intelligence Index
      type: artificial_analysis_intelligence_index
      value: 67
    - name: Artificial Analysis Coding Index
      type: artificial_analysis_coding_index
      value: 52.2
    - name: Artificial Analysis Math Index
      type: artificial_analysis_math_index
      value: 94.7
    - name: MMLU-Pro
      type: mmlu_pro
      value: 0.848
    - name: GPQA
      type: gpqa
      value: 0.838
    - name: HLE
      type: hle
      value: 0.223
    - name: LiveCodeBench
      type: livecodebench
      value: 0.853
    - name: SciCode
      type: scicode
      value: 0.424
    - name: AIME 2025
      type: aime_25
      value: 0.947
    - name: IFBench
      type: ifbench
      value: 0.681
    - name: LCR
      type: lcr
      value: 0.663
    - name: Terminal-Bench Hard
      type: terminalbench_hard
      value: 0.291
    - name: Tau2
      type: tau2
      value: 0.93
    source:
      name: Artificial Analysis API
      url: https://artificialanalysis.ai
---
<div align="center">
  <picture>
    <img src="figures/kimi-logo.png" width="30%" alt="Kimi K2: Open Agentic Intelligence">
  </picture>
</div>
<hr>

<div align="center" style="line-height:1">
  <a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2-ff6b6b?color=1783ff&logoColor=white"/></a>
  <a href="https://www.moonshot.ai" target="_blank"><img alt="Homepage" src="https://img.shields.io/badge/Homepage-Moonshot%20AI-white?logo=Kimi&logoColor=white"/></a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/moonshotai" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Moonshot%20AI-ffc107?color=ffc107&logoColor=white"/></a>
  <a href="https://twitter.com/kimi_moonshot" target="_blank"><img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-Kimi.ai-white?logo=x&logoColor=white"/></a>
  <a href="https://discord.gg/TYU2fdJykW" target="_blank"><img alt="Discord" src="https://img.shields.io/badge/Discord-Kimi.ai-white?logo=discord&logoColor=white"/></a>
</div>
<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/moonshotai/Kimi-K2-Thinking/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Modified_MIT-f5de53?&color=f5de53"/></a>
</div>

<p align="center">
  <b>📰&nbsp;&nbsp;<a href="https://moonshotai.github.io/Kimi-K2/thinking.html">Tech Blog</a></b>
</p>

## 1. Model Introduction

Kimi K2 Thinking is our latest and most capable open-source thinking model. Built on Kimi K2, it is a thinking agent that reasons step-by-step while dynamically invoking tools. It sets a new state-of-the-art on Humanity's Last Exam (HLE), BrowseComp, and other benchmarks by dramatically scaling multi-step reasoning depth and maintaining stable tool use across 200–300 sequential calls. At the same time, K2 Thinking is a natively INT4-quantized model with a 256K context window, reducing inference latency and GPU memory usage without degrading quality.

### Key Features
- **Deep Thinking & Tool Orchestration**: End-to-end trained to interleave chain-of-thought reasoning with function calls, enabling autonomous research, coding, and writing workflows that last hundreds of steps without drift.
- **Native INT4 Quantization**: Quantization-Aware Training (QAT) is employed in the post-training stage to achieve a lossless 2x speed-up in low-latency mode.
- **Stable Long-Horizon Agency**: Maintains coherent, goal-directed behavior across up to 200–300 consecutive tool invocations, surpassing prior models that degrade after 30–50 steps.

## 2. Model Summary

<div align="center">

| | |
|:---:|:---:|
| **Architecture** | Mixture-of-Experts (MoE) |
| **Total Parameters** | 1T |
| **Activated Parameters** | 32B |
| **Number of Layers** (Dense layer included) | 61 |
| **Number of Dense Layers** | 1 |
| **Attention Hidden Dimension** | 7168 |
| **MoE Hidden Dimension** (per Expert) | 2048 |
| **Number of Attention Heads** | 64 |
| **Number of Experts** | 384 |
| **Selected Experts per Token** | 8 |
| **Number of Shared Experts** | 1 |
| **Vocabulary Size** | 160K |
| **Context Length** | 256K |
| **Attention Mechanism** | MLA |
| **Activation Function** | SwiGLU |
</div>

## 3. Evaluation Results

**Reasoning Tasks**
| Benchmark | Setting | K2 Thinking | GPT-5<br> (High) | Claude Sonnet 4.5<br> (Thinking) | K2 0905 | DeepSeek-V3.2 | Grok-4 |
|:----------:|:--------:|:------------:|:------:|:----------------------------:|:--------:|:--------------:|:-------:|
| **HLE (Text-only)** | no tools | 23.9 | 26.3 | 19.8* | 7.9 | 19.8 | 25.4 |
| | w/ tools | 44.9 | 41.7* | 32.0* | 21.7 | 20.3* | 41.0 |
| | heavy | 51.0 | 42.0 | - | - | - | 50.7 |
| **AIME25** | no tools | 94.5 | 94.6 | 87.0 | 51.0 | 89.3 | 91.7 |
| | w/ python | 99.1 | 99.6 | 100.0 | 75.2 | 58.1* | 98.8 |
| | heavy | 100.0 | 100.0 | - | - | - | 100.0 |
| **HMMT25** | no tools | 89.4 | 93.3 | 74.6* | 38.8 | 83.6 | 90.0 |
| | w/ python | 95.1 | 96.7 | 88.8* | 70.4 | 49.5* | 93.9 |
| | heavy | 97.5 | 100.0 | - | - | - | 96.7 |
| **IMO-AnswerBench** | no tools | 78.6 | 76.0* | 65.9* | 45.8 | 76.0* | 73.1 |
| **GPQA** | no tools | 84.5 | 85.7 | 83.4 | 74.2 | 79.9 | 87.5 |

**General Tasks**
| Benchmark | Setting | K2 Thinking | GPT-5<br> (High) | Claude Sonnet 4.5<br> (Thinking) | K2 0905 | DeepSeek-V3.2 |
|:----------:|:--------:|:------------:|:------:|:----------------------------:|:--------:|:--------------:|
| **MMLU-Pro** | no tools | 84.6 | 87.1 | 87.5 | 81.9 | 85.0 |
| **MMLU-Redux** | no tools | 94.4 | 95.3 | 95.6 | 92.7 | 93.7 |
| **Longform Writing** | no tools | 73.8 | 71.4 | 79.8 | 62.8 | 72.5 |
| **HealthBench** | no tools | 58.0 | 67.2 | 44.2 | 43.8 | 46.9 |

**Agentic Search Tasks**
| Benchmark | Setting | K2 Thinking | GPT-5<br> (High) | Claude Sonnet 4.5<br> (Thinking) | K2 0905 | DeepSeek-V3.2 |
|:----------:|:--------:|:------------:|:------:|:----------------------------:|:--------:|:--------------:|
| **BrowseComp** | w/ tools | 60.2 | 54.9 | 24.1 | 7.4 | 40.1 |
| **BrowseComp-ZH** | w/ tools | 62.3 | 63.0* | 42.4* | 22.2 | 47.9 |
| **Seal-0** | w/ tools | 56.3 | 51.4* | 53.4* | 25.2 | 38.5* |
| **FinSearchComp-T3** | w/ tools | 47.4 | 48.5* | 44.0* | 10.4 | 27.0* |
| **Frames** | w/ tools | 87.0 | 86.0* | 85.0* | 58.1 | 80.2* |

**Coding Tasks**
| Benchmark | Setting | K2 Thinking | GPT-5<br> (High) | Claude Sonnet 4.5<br> (Thinking) | K2 0905 | DeepSeek-V3.2 |
|:----------:|:--------:|:------------:|:------:|:----------------------------:|:--------:|:--------------:|
| **SWE-bench Verified** | w/ tools | 71.3 | 74.9 | 77.2 | 69.2 | 67.8 |
| **SWE-bench Multilingual** | w/ tools | 61.1 | 55.3* | 68.0 | 55.9 | 57.9 |
| **Multi-SWE-bench** | w/ tools | 41.9 | 39.3* | 44.3 | 33.5 | 30.6 |
| **SciCode** | no tools | 44.8 | 42.9 | 44.7 | 30.7 | 37.7 |
| **LiveCodeBenchV6** | no tools | 83.1 | 87.0* | 64.0* | 56.1* | 74.1 |
| **OJ-Bench (cpp)** | no tools | 48.7 | 56.2* | 30.4* | 25.5* | 38.2* |
| **Terminal-Bench** | w/ simulated tools (JSON) | 47.1 | 43.8 | 51.0 | 44.5 | 37.7 |
+ <details>
162
+ <summary><b>Footnotes</b></summary>
163
+
164
+ 1. To ensure a fast, lightweight experience, we selectively employ a subset of tools and reduce the number of tool call steps under the chat mode on kimi.com. As a result, chatting on kimi.com may not reproduce our benchmark scores. Our agentic mode will be updated soon to reflect the full capabilities of K2 Thinking.
165
+
166
+ 2. **Testing Details**:
167
+  2.1. All benchmarks were evaluated at temperature = 1.0 and 256 k context length for K2 Thinking, except for SciCode, for which we followed the official temperature setting of 0.0.
168
+  2.2. HLE (no tools), AIME25, HMMT25, and GPQA were capped at a 96k thinking-token budget, while IMO-Answer Bench, LiveCodeBench and OJ-Bench were capped at a 128k thinking-token budget. Longform Writing was capped at a 32k completion-token budget.
169
+  2.3. For AIME and HMMT (no tools), we report the average of 32 runs (avg@32). For AIME and HMMT (with Python), we report the average of 16 runs (avg@16). For IMO-AnswerBench, we report the average of 8 runs (avg@8).
170
+
171
+ 3. **Baselines**:
172
+  3.1 GPT-5, Claude-4.5-sonnet, Grok-4 results and DeepSeek-V3.2 results are quoted from the [GPT-5 post](https://openai.com/index/introducing-gpt-5/), [GPT-5 for Developers post](https://openai.com/index/introducing-gpt-5-for-developers/), [GPT-5 system card](https://openai.com/index/gpt-5-system-card/), [claude-sonnet-4-5 post](https://www.anthropic.com/news/claude-sonnet-4-5), [grok-4 post](https://x.ai/news/grok-4), [deepseek-v3.2 post](https://api-docs.deepseek.com/news/news250929), the [public Terminal-Bench leaderboard](https://www.tbench.ai/leaderboard) (Terminus-2), the [public Vals AI leaderboard](https://vals.ai/) and [artificialanalysis](https://artificialanalysis.ai/). Benchmarks for which no available public scores were re-tested under the same conditions used for k2 thinking and are marked with an asterisk(*). For the GPT-5 test, we set the reasoning effort to high.
173
+  3.2 The GPT-5 and Grok-4 on the HLE full set with tools are 35.2 and 38.6 from the official posts. In our internal evaluation on the HLE text-only subset, GPT-5 scores 41.7 and Grok-4 scores 38.6 (Grok-4’s launch cited 41.0 on the text-only subset). For GPT-5's HLE text-only w/o tool, we use score from <a href="https://scale.com/leaderboard/humanitys_last_exam_text_only" target="_blank">Scale.ai</a>. The official GPT5 HLE full set w/o tool is 24.8.
174
+  3.3 For <a href="https://aclanthology.org/2025.emnlp-main.1794.pdf" target="_blank">IMO-AnswerBench</a>: GPT-5 scored 65.6 in the benchmark paper. We re-evaluated GPT-5 with official API and obtained a score of 76.
175
+
176
+ 4. **For HLE (w/ tools) and the agentic-search benchmarks**:
177
+  4.1. K2 Thinking was equipped with search, code-interpreter, and web-browsing tools.
178
+  4.2. BrowseComp-ZH, Seal-0 and FinSearchComp-T3 were run 4 times independently and the average is reported (avg@4).
179
+  4.3. The evaluation used o3-mini as judge, configured identically to the official HLE setting; judge prompts were taken verbatim from the official repository.
180
+  4.4. On HLE, the maximum step limit was 120, with a 48 k-token reasoning budget per step; on agentic-search tasks, the limit was 300 steps with a 24 k-token reasoning budget per step.
181
+  4.5. When tool execution results cause the accumulated input to exceed the model's context limit (256k), we employ a simple context management strategy that hides all previous tool outputs.
182
+  4.6. The web access to Hugging Face may lead to data leakage in certain benchmark tests, such as HLE. K2 Thinking can achieve a score of 51.3 on HLE without blocking Hugging Face. To ensure a fair and rigorous comparison, we blocked access to Hugging Face during testing.
183
+
184
+ 5. **For Coding Tasks**:
185
+  5.1. Terminal-Bench scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser.
186
+  5.2. For other coding tasks, the result was produced with our in-house evaluation harness. The harness is derived from SWE-agent, but we clamp the context windows of the Bash and Edit tools and rewrite the system prompt to match the task semantics.
187
+  5.3. All reported scores of coding tasks are averaged over 5 independent runs.
188
+
189
+ 6. **Heavy Mode**: K2 Thinking Heavy Mode employs an efficient parallel strategy: it first rolls out eight trajectories simultaneously, then reflectively aggregates all outputs to generate the final result. Heavy mode for GPT-5 denotes the official GPT-5 Pro score.
190
+ </details>
191
+
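To make the strategy in footnote 4.5 concrete, below is a minimal sketch of one way to hide earlier tool outputs once the accumulated input exceeds the context limit. The message format, the `count_tokens` approximation, and the choice to keep the most recent tool output visible are illustrative assumptions, not the actual evaluation harness.

```python
# Illustrative sketch of the context management described in footnote 4.5.
# Assumptions: OpenAI-style message dicts, a crude token estimate, and
# keeping only the most recent tool output visible.

CONTEXT_LIMIT = 256_000  # tokens; K2 Thinking's context window

def count_tokens(messages: list[dict]) -> int:
    # Rough proxy (~4 characters per token); a real harness would use the tokenizer.
    return sum(len(str(m.get("content", ""))) // 4 for m in messages)

def manage_context(messages: list[dict]) -> list[dict]:
    """Hide earlier tool outputs when the conversation exceeds the context limit."""
    if count_tokens(messages) <= CONTEXT_LIMIT:
        return messages
    managed = list(messages)
    tool_indices = [i for i, m in enumerate(managed) if m.get("role") == "tool"]
    for i in tool_indices[:-1]:  # keep the most recent tool output intact
        managed[i] = {**managed[i], "content": "[tool output hidden]"}
    return managed
```
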
## 4. Native INT4 Quantization

Low-bit quantization is an effective way to reduce inference latency and GPU memory usage on large-scale inference servers. However, thinking models generate very long decoding sequences, so quantization often causes substantial performance drops.

To overcome this challenge, we adopt Quantization-Aware Training (QAT) during the post-training phase, applying INT4 weight-only quantization to the MoE components. This allows K2 Thinking to support native INT4 inference with a roughly 2x generation-speed improvement while achieving state-of-the-art performance. All benchmark results are reported under INT4 precision.

The checkpoints are saved in the compressed-tensors format, which is supported by most mainstream inference engines. If you need the checkpoints in a higher precision such as FP8 or BF16, you can refer to the [official compressed-tensors repository](https://github.com/vllm-project/compressed-tensors) to unpack the INT4 weights and convert them to any higher precision.
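
For intuition about what such a conversion involves, here is a small, self-contained sketch of INT4 weight-only dequantization in PyTorch. The packing layout (two 4-bit values per byte, one scale per group of weights) is a common convention assumed for illustration; the actual compressed-tensors layout may differ, so rely on the official repository for real conversions.

```python
import torch

def unpack_int4_to_bf16(packed: torch.Tensor, scales: torch.Tensor,
                        group_size: int = 32) -> torch.Tensor:
    """Dequantize packed INT4 weights to BF16.

    Assumes each uint8 byte holds two 4-bit values and each contiguous group
    of `group_size` weights shares one scale. Illustrative only; the real
    compressed-tensors layout may differ.
    """
    lo = (packed & 0x0F).to(torch.int8)                # low nibble of each byte
    hi = (packed >> 4).to(torch.int8)                  # high nibble of each byte
    vals = torch.stack((lo, hi), dim=-1).flatten(-2)   # interleave the nibbles
    vals = (vals - 8).reshape(-1, group_size).to(torch.float32)  # [0,15] -> [-8,7]
    w = vals * scales.reshape(-1, 1).to(torch.float32)  # apply per-group scales
    return w.flatten().to(torch.bfloat16)

# Tiny smoke test with random packed weights and per-group scales.
packed = torch.randint(0, 256, (64,), dtype=torch.uint8)  # 128 INT4 values
scales = torch.rand(128 // 32)                            # one scale per group
print(unpack_int4_to_bf16(packed, scales).shape)          # torch.Size([128])
```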

## 5. Deployment
> [!NOTE]
> You can access the K2 Thinking API at https://platform.moonshot.ai; we provide an OpenAI/Anthropic-compatible API.

We currently recommend running Kimi-K2-Thinking on the following inference engines:

* vLLM
* SGLang
* KTransformers

Deployment examples can be found in the [Model Deployment Guide](docs/deploy_guidance.md).
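
Once one of these engines is serving the model, a quick sanity check is to list the models exposed by the local OpenAI-compatible endpoint. The base URL, port, and API key below are placeholders; use whatever your engine reports at startup.

```python
from openai import OpenAI

# Placeholder endpoint; match it to the host/port your inference engine reports.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# An OpenAI-compatible server exposes the deployed model under /v1/models.
for model in client.models.list().data:
    print(model.id)
```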

---

## 6. Model Usage

### Chat Completion

Once the local inference service is up, you can interact with it through the chat endpoint:

```python
import openai

def simple_chat(client: openai.OpenAI, model_name: str):
    messages = [
        {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
        {"role": "user", "content": [{"type": "text", "text": "which one is bigger, 9.11 or 9.9? think carefully."}]},
    ]
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        temperature=1.0,
        max_tokens=4096
    )
    # The final answer and the reasoning trace come back as separate fields.
    print(f"k2 answer: {response.choices[0].message.content}")
    print("=====below is reasoning content======")
    print(f"reasoning content: {response.choices[0].message.reasoning_content}")
```
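
A minimal driver for `simple_chat`, assuming a local OpenAI-compatible endpoint and `moonshotai/Kimi-K2-Thinking` as the served model name (both placeholders for your deployment):

```python
import openai

# Placeholder endpoint and model name; match them to your deployment.
client = openai.OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
simple_chat(client, "moonshotai/Kimi-K2-Thinking")
```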

> [!NOTE]
> The recommended temperature for Kimi-K2-Thinking is `temperature = 1.0`.
> If no special instructions are required, the system prompt above is a good default.

---

### Tool Calling

Kimi-K2-Thinking has the same tool-calling settings as Kimi-K2-Instruct.

To enable tool calling, pass the list of available tools in each request; the model will then autonomously decide when and how to invoke them.

The following example demonstrates calling a weather tool end-to-end:

```python
import json

from openai import OpenAI

# Your tool implementation
def get_weather(city: str) -> dict:
    return {"weather": "Sunny"}

# Tool schema definition
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Retrieve current weather information. Call this when the user asks about the weather.",
        "parameters": {
            "type": "object",
            "required": ["city"],
            "properties": {
                "city": {
                    "type": "string",
                    "description": "Name of the city"
                }
            }
        }
    }
}]

# Map tool names to their implementations
tool_map = {
    "get_weather": get_weather
}

def tool_call_with_client(client: OpenAI, model_name: str):
    messages = [
        {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
        {"role": "user", "content": "What's the weather like in Beijing today? Use the tool to check."}
    ]
    finish_reason = None
    while finish_reason is None or finish_reason == "tool_calls":
        completion = client.chat.completions.create(
            model=model_name,
            messages=messages,
            temperature=1.0,
            tools=tools,  # tool list defined above
            tool_choice="auto"
        )
        choice = completion.choices[0]
        finish_reason = choice.finish_reason
        if finish_reason == "tool_calls":
            # Append the assistant turn that requested the tool calls...
            messages.append(choice.message)
            # ...then execute each requested tool and append its result.
            for tool_call in choice.message.tool_calls:
                tool_call_name = tool_call.function.name
                tool_call_arguments = json.loads(tool_call.function.arguments)
                tool_function = tool_map[tool_call_name]
                tool_result = tool_function(**tool_call_arguments)
                print("tool_result:", tool_result)
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "name": tool_call_name,
                    "content": json.dumps(tool_result)
                })
    print("-" * 100)
    print(choice.message.content)
```
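
A client constructed the same way as in the chat-completion example drives the whole loop with a single call (endpoint and model name are again placeholders):

```python
from openai import OpenAI

# Placeholder endpoint and model name; match them to your deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Runs the query -> tool call -> tool result -> final answer loop end-to-end.
tool_call_with_client(client, "moonshotai/Kimi-K2-Thinking")
```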

The `tool_call_with_client` function implements the pipeline from user query to tool execution.
This pipeline requires the inference engine to support Kimi-K2's native tool-parsing logic.
For more information, see the [Tool Calling Guide](docs/tool_call_guidance.md).

---

## 7. License

Both the code repository and the model weights are released under the [Modified MIT License](LICENSE).

---

## 8. Third Party Notices

See [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md).

---

## 9. Contact Us

If you have any questions, please reach out at [support@moonshot.cn](mailto:support@moonshot.cn).