Quamba2: A Robust and Scalable Post-training Quantization Framework for Selective State Space Models
This repository contains the quantized models and associated code for Quamba2, presented in the paper Quamba2: A Robust and Scalable Post-training Quantization Framework for Selective State Space Models.
- Paper: https://huggingface.co/papers/2503.22879
- Project Page: https://hychiang.info/projects/quamba2
- Code: https://github.com/enyac-group/Quamba
Abstract
State Space Models (SSMs) are emerging as a compelling alternative to Transformers because of their consistent memory usage and high performance. Even so, scaling up SSMs on cloud services or resource-limited devices is challenging due to their storage and compute requirements. To overcome this, quantizing SSMs with low bit-width data formats can reduce model size and benefit from hardware acceleration. Since SSMs are prone to quantization-induced errors, recent efforts have focused on optimizing a particular model or bit-width for efficiency without sacrificing performance. However, different scenarios call for different bit-width configurations, such as W4A8 for boosting large-batch decoding speed and W4A16 for enhancing generation speed in single-user, short-prompt applications. To this end, we present Quamba2, compatible with W8A8, W4A8, and W4A16 for both Mamba1 and Mamba2 backbones, addressing the growing demand for SSM deployment on various platforms. Based on the channel-order-preserving and activation-persistence properties of SSMs, we propose an offline approach that quantizes the inputs of the linear recurrence in 8 bits by sorting and clustering the input $x$, combined with per-state-group quantization for the input-dependent parameters $B$ and $C$. To ensure compute invariance of the SSM output, we rearrange the weights offline according to the clustering sequence. Experiments show that Quamba2-8B outperforms two state-of-the-art SSM quantization methods, delivering 1.3$\times$ and 3$\times$ speed-ups in the pre-filling and generation stages, respectively, and a 4$\times$ memory reduction, with only a $1.6\%$ average accuracy drop. The evaluation on MMLU shows the generalizability and robustness of our framework.
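To make the sort-and-cluster idea concrete, below is a minimal NumPy sketch of static per-cluster 8-bit quantization of the SSM input $x$: channels are sorted by their calibration statistics, split into clusters, and each cluster gets its own int8 scale. The group count, toy calibration data, and helper names are illustrative assumptions, not the Quamba2 implementation (see the code repository for that). The same channel permutation would be applied offline to the weights of the adjacent layers so the SSM output stays unchanged.

```python
# Illustrative sketch of sort-and-cluster static 8-bit quantization.
# Group count, calibration data, and function names are assumptions.
import numpy as np

def sort_and_cluster_scales(x_calib, n_groups=8):
    """Compute a channel permutation and per-cluster int8 scales for input x.

    x_calib: (n_tokens, n_channels) calibration activations.
    """
    # 1. Per-channel calibration statistic (max magnitude).
    ch_max = np.abs(x_calib).max(axis=0)                 # (n_channels,)
    # 2. Sort channels by magnitude so similar channels become contiguous.
    order = np.argsort(ch_max)                           # channel permutation
    # 3. Split the sorted channels into clusters; each cluster gets its own
    #    symmetric int8 scale (max / 127).
    groups = np.array_split(ch_max[order], n_groups)
    scales = np.array([g.max() / 127.0 for g in groups])
    return order, scales

def quantize_x(x, order, scales):
    """Statically quantize x to int8 with the offline permutation and scales."""
    x_sorted = x[:, order]
    groups = np.array_split(np.arange(x_sorted.shape[1]), len(scales))
    q = np.empty_like(x_sorted, dtype=np.int8)
    for g, s in zip(groups, scales):
        q[:, g] = np.clip(np.round(x_sorted[:, g] / s), -128, 127).astype(np.int8)
    return q

# Toy calibration set: a few channels carry much larger magnitudes.
rng = np.random.default_rng(0)
boost = np.where(np.arange(64) % 16 == 0, 20.0, 1.0)
x_calib = rng.normal(size=(512, 64)) * boost
order, scales = sort_and_cluster_scales(x_calib, n_groups=8)
x_q = quantize_x(rng.normal(size=(4, 64)) * boost, order, scales)
print(scales.round(3), x_q.dtype)
```

Because the large-magnitude channels end up in their own clusters, their scales no longer inflate the quantization error of the small-magnitude channels; the per-state-group quantization of $B$ and $C$ follows the same principle at the state-group granularity.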
Highlights
- Supports W4A8 / W4A16 / W4AX / W8A8 for Mamba1 and Mamba2.
- Achieves 4$\times$ memory reduction.
- Delivers 1.3$\times$ and 3$\times$ speed-ups in the pre-filling and generation stages, respectively, with only a 1.6% average accuracy drop.
- Achieves 13 tokens per second with Mamba2-8B on an Orin Nano 8G.
Real-time Generation on an NVIDIA Orin Nano 8G
Setup and Installation
For detailed instructions on hardware and software requirements, cloning the repository, setting up the environment, and building 3rd-party libraries, please refer to the GitHub repository's setup guide.
Sample Usage: Generate
After setting up the environment and downloading a quantized model (e.g., ut-enyac/quamba2-2.7b-w4a8), you can generate text using the provided script:
python generate.py ut-enyac/quamba2-2.7b-w4a8 --prompt "My cat wrote all this CUDA code for a new language model and" --topp 0.9 --temperature 0.7 --repetition_penalty 1.2 --quantize --cache_graph --pretrained_dir pretrained_models
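To compare the bit-width configurations listed in the highlights, the same script can be invoked in a loop. The sketch below reuses only the flags from the command above; the w8a8 and w4a16 repository names are assumptions that follow the naming pattern of the W4A8 checkpoint.

```python
# Hedged sketch: sweep bit-width configurations by re-invoking generate.py.
# The w8a8 / w4a16 checkpoint names are assumed to follow the W4A8 pattern.
import subprocess

CONFIGS = ["w4a8", "w8a8", "w4a16"]
PROMPT = "My cat wrote all this CUDA code for a new language model and"

for cfg in CONFIGS:
    subprocess.run([
        "python", "generate.py", f"ut-enyac/quamba2-2.7b-{cfg}",
        "--prompt", PROMPT,
        "--topp", "0.9", "--temperature", "0.7", "--repetition_penalty", "1.2",
        "--quantize", "--cache_graph",
        "--pretrained_dir", "pretrained_models",
    ], check=True)
```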
Citation
If you find our work helpful or inspiring, please feel free to cite it:
@inproceedings{chiang2025quamba2,
title = {Quamba2: A Robust and Scalable Post-training Quantization Framework for Selective State Space Models},
author = {Chiang, Hung-Yueh and Chang, Chi-Chih and Frumkin, Natalia and Wu, Kai-Chiang and Abdelfattah, Mohamed S. and Marculescu, Diana},
booktitle = {Forty-Second International Conference on Machine Learning (ICML)},
year = {2025}
}
@inproceedings{chiang2025quamba,
title = {Quamba: A Post-Training Quantization Recipe for Selective State Space Models},
author = {Chiang*, Hung-Yueh and Chang*, Chi-Chih and Frumkin, Natalia and Wu, Kai-Chiang and Marculescu, Diana},
booktitle = {The Thirteenth International Conference on Learning Representations (ICLR)},
year = {2025},
}
