MiniPLM: Knowledge Distillation for Pre-Training Language Models
Paper: arXiv:2410.17215
Pretrain-Qwen-500M is a 500M-parameter model with the Qwen architecture, conventionally pre-trained from scratch on the Pile for 50B tokens.
We also open-source the tokenized pre-training corpus for reproducibility.
Pretrain-Qwen-500M serves as the baseline for MiniPLM-Qwen-500M.
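A minimal usage sketch for loading the checkpoint with Hugging Face Transformers. The repository id below is an assumed placeholder, not confirmed by this card; substitute the actual Hub path of the released model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id; replace with the released checkpoint path.
model_id = "MiniLLM/Pretrain-Qwen-500M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Quick sanity-check generation with the 500M baseline.
prompt = "Knowledge distillation is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```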
MiniPLM models achieve better performance given the same computation and scale well across model sizes.
@article{miniplm,
  title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
  author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
  journal={arXiv preprint arXiv:2410.17215},
  year={2024}
}