This is an MXFP4_MOE quantization of the model Intern-S1.

Original model: https://huggingface.co/internlm/Intern-S1

This model's GGUFs have been removed to conserve space in my repos.
If you want them, just message me and I will make them available on demand.

Model details:
- Format: GGUF
- Model size: 235B params
- Architecture: qwen3moe
- Quantization: 4-bit (MXFP4)

Repository: noctrex/Intern-S1-MXFP4_MOE-GGUF, quantized from the base model internlm/Intern-S1.