Confucius3-Math: A Lightweight High-Performance Reasoning LLM for Chinese K-12 Mathematics Learning
Paper: https://arxiv.org/abs/2506.18330
This model was converted to GGUF format from netease-youdao/Confucius3-Math using llama.cpp.
Refer to the original model card for more details on the model.
We provide multiple GGUF quantizations, each stored in its own subdirectory. Note that only the BF16 version has been evaluated for quality.
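For example, a single quantization can be fetched with the Hugging Face CLI. The include pattern below is only a guess at the file layout, so check the repository listing for the exact subdirectory and file names:

# download only the BF16 files (file/subdirectory pattern is an assumption; verify against the repo listing)
huggingface-cli download netease-youdao/Confucius3-Math-GGUF --include "*bf16*" --local-dir netease-youdao/Confucius3-Math-GGUF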
Before running the model, build and install llama.cpp.
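A typical build follows the upstream llama.cpp instructions; the commands below are a CPU-only sketch, and platform-specific options (e.g. CUDA or Metal backends) may be needed on your machine:

# clone llama.cpp and build it with CMake (CPU-only default; see the llama.cpp README for GPU backends)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release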
Since the uploaded model files are split into shards, you need to run the following commands to merge them before running the model.
./build/bin/llama-gguf-split --merge netease-youdao/Confucius3-Math-GGUF/confucius3-math-bf16-00001-of-00008.gguf confucius3-math-bf16.gguf
./build/bin/llama-cli -m confucius3-math-bf16.gguf
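The plain invocation above starts llama-cli with default settings. As a rough sketch, an interactive chat session or an OpenAI-compatible HTTP server can be started as follows; the -ngl and -c values are illustrative, and flag names may differ across llama.cpp versions:

# interactive chat; -ngl offloads layers to the GPU, -c sets the context window
./build/bin/llama-cli -m confucius3-math-bf16.gguf -cnv -ngl 99 -c 4096
# or serve the model over HTTP
./build/bin/llama-server -m confucius3-math-bf16.gguf --port 8080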
If you find our work helpful, please consider citing it:
@misc{confucius3-math,
author = {NetEase Youdao Team},
title = {Confucius3-Math: A Lightweight High-Performance Reasoning LLM for Chinese K-12 Mathematics Learning},
url = {https://arxiv.org/abs/2506.18330},
month = {June},
year = {2025}
}
Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B