Moxin x llama.cpp Customized Quant for Qwen3-235B-A22B-Instruct-2507

We sincerely thank the open-source community developers and contributors, especially Unsloth, for providing the BF16 version and the imatrix file.

We really appreciate the attention, and we're happy to share additional quantization variants for everyone to try out and experiment with. We hope you enjoy them!

- Q2_K_XL : 82.91 GiB (3.03 BPW)
- Q4_K_XL : 143.13 GiB (4.90 BPW)
- Q8_0 : 232.77 GiB (8.51 BPW)
- Other Quant Versions (TBD)
👈 Download Guide
huggingface-cli download moxin-org/Qwen3-235B-A22B-Instruct-2507-GGUF --include "*Q4_K_XL*" --local-dir ./Qwen3-235B-A22B-Instruct-2507-GGUF
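
The same --include pattern selects any of the other variants; for example, the Q2_K_XL shards (a sketch that just swaps the filename pattern):

huggingface-cli download moxin-org/Qwen3-235B-A22B-Instruct-2507-GGUF --include "*Q2_K_XL*" --local-dir ./Qwen3-235B-A22B-Instruct-2507-GGUF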
# !pip install huggingface_hub hf_transfer
import os
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # optional: enable hf_transfer for faster downloads
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id = "moxin-org/Qwen3-235B-A22B-Instruct-2507-GGUF",
    local_dir = "Qwen3-235B-A22B-Instruct-2507-GGUF",
    allow_patterns = ["*Q4_K_XL*"],  # download only the Q4_K_XL shards
)

Downloads are available via huggingface_hub, huggingface-cli, snapshot_download, and Xet.
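
For Xet-backed transfers, a recent huggingface_hub with the hf_xet plugin is enough; the hub library uses it automatically once installed (a minimal sketch, assuming the optional hf_xet extra):

# Optional: enable Xet-backed downloads (hf_xet is picked up automatically when present)
pip install -U "huggingface_hub[hf_xet]"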

Usage

Example of running the GGUF with a local build of llama.cpp (llama-cli / llama-server).

👈 Build llama.cpp locally
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp

# add -DLLAMA_CURL=OFF if the build fails with a curl-related error
cmake -B build -DGGML_CUDA=ON -DBUILD_SHARED_LIBS=OFF
cmake --build build --config Release -j --clean-first
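
Before loading a 100+ GiB model, it is worth confirming the build succeeded; the binary can print its version and build info first (a quick check; output format varies across llama.cpp revisions):

# Smoke test: prints the llama.cpp build number and commit
build/bin/llama-cli --version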
# Pointing at the first shard is enough; llama.cpp loads the remaining split files automatically.
build/bin/llama-cli -m Qwen3-235B-A22B-Instruct-2507-GGUF/Moxin-Q4_K_XL/Qwen3-235B-A22B-Instruct-2507-Q4_K_XL-00001-of-00006.gguf \
  -ngl 99 \
  --temp 0.7 \
  --top-k 20 \
  --top-p 0.8 \
  --min-p 0.01 \
  --ctx-size 8192   # or 4096, 16384
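
llama-server accepts the same model and sampling flags and exposes an OpenAI-compatible HTTP API; a minimal sketch (the port and request body below are illustrative, not from the original card):

# Serve the model over HTTP (pointing at the first shard loads all splits)
build/bin/llama-server -m Qwen3-235B-A22B-Instruct-2507-GGUF/Moxin-Q4_K_XL/Qwen3-235B-A22B-Instruct-2507-Q4_K_XL-00001-of-00006.gguf \
  -ngl 99 \
  --ctx-size 8192 \
  --port 8080

# From another shell: query the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [{"role": "user", "content": "Hello!"}],
  "temperature": 0.7,
  "top_p": 0.8
}'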

Citation

If this work is helpful, please kindly cite it as:

@article{chen2025collaborative,
  title={Collaborative Compression for Large-Scale MoE Deployment on Edge},
  author={Chen, Yixiao and Xie, Yanyue and Yang, Ruining and Jiang, Wei and Wang, Wei and He, Yong and Chen, Yue and Zhao, Pu and Wang, Yanzhi},
  journal={arXiv preprint arXiv:2509.25689},
  year={2025}
}

Acknowledgements

This repository builds upon the outstanding work of the following open-source authors and projects:

- llama.cpp (ggml-org)
- Unsloth (BF16 version and imatrix file)
- Qwen team (Qwen3-235B-A22B-Instruct-2507 base model)

We sincerely thank them for their excellent contributions to the open-source community.
