MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head (DAGroup-PKU, submitted by yfdeng)