Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting Paper • 2404.18911 • Published Apr 29, 2024 • 30
Revealing the Power of Post-Training for Small Language Models via Knowledge Distillation Paper • 2509.26497 • Published Sep 30, 2025
ROOT: Robust Orthogonalized Optimizer for Neural Network Training Paper • 2511.20626 • Published Nov 2025 • 169
Benchmarking Optimizers for Large Language Model Pretraining Paper • 2509.01440 • Published Sep 1, 2025 • 24
DenseMamba: State Space Models with Dense Hidden Connection for Efficient Large Language Models Paper • 2403.00818 • Published Feb 26, 2024 • 19
Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets Paper • 2010.14819 • Published Oct 28, 2020
GhostNetV2: Enhance Cheap Operation with Long-Range Attention Paper • 2211.12905 • Published Nov 23, 2022