Gemma-4-26B-A4B

Quality: quantized (mixed quantization per tensor, group size 32, 7.551 bits per weight)

The layers use 6-, 7-, or 8-bit affine quantization with a group size of 32; embeddings are stored in bf16.
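Group-wise affine quantization maps each group of 32 consecutive weights to low-bit integers using a per-group scale and offset. A minimal NumPy sketch of the idea (illustrative only; the function names are hypothetical and this is not the actual MLX kernel):

```python
import numpy as np

def affine_quantize(w, bits=8, group_size=32):
    # Split the flattened weights into groups of `group_size`;
    # each group gets its own scale and minimum (affine offset).
    groups = w.reshape(-1, group_size)
    w_min = groups.min(axis=1, keepdims=True)
    w_max = groups.max(axis=1, keepdims=True)
    levels = 2**bits - 1                      # e.g. 63 for 6-bit
    scale = (w_max - w_min) / levels
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    q = np.round((groups - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def affine_dequantize(q, scale, w_min, shape):
    # Reconstruct approximate weights from integers + per-group params.
    return (q * scale + w_min).reshape(shape)

w = np.random.randn(4, 64).astype(np.float32)
q, scale, w_min = affine_quantize(w, bits=6)
w_hat = affine_dequantize(q, scale, w_min, w.shape)
max_err = np.abs(w - w_hat).max()  # bounded by half a quantization step
```

Smaller groups track local weight ranges more tightly, which is why a group size of 32 keeps the rounding error low even at 6 bits.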

Gemma is a family of open models built by Google DeepMind. Gemma 4 models are multimodal, handling text and image input (with audio supported on small models) and generating text output. This release includes open-weights models in both pre-trained and instruction-tuned variants. Gemma 4 features a context window of up to 256K tokens and maintains multilingual support in over 140 languages.


Source

This model was converted to MLX format from google/gemma-4-26B-A4B-it using mlx-vlm version 0.4.4.
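A conversion like this can typically be reproduced with mlx-vlm's convert entry point. A sketch assuming the usual mlx-vlm CLI flags (flag names and defaults may differ between versions, and the per-tensor mixed-bit assignment shown on this card may require additional options):

```shell
# Install the version used for this conversion
pip install mlx-vlm==0.4.4

# Convert the original Hugging Face checkpoint to quantized MLX format.
# Flags below are assumptions; check `python -m mlx_vlm.convert --help`.
python -m mlx_vlm.convert \
    --hf-path google/gemma-4-26B-A4B-it \
    --mlx-path Gemma-4-26B-A4B-MLX-mixed-7bit \
    -q
```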

Downloads last month: 503
Model size: 27B params (Safetensors)
Tensor types: BF16 · U32
Format: MLX



Model tree for TheCluster/Gemma-4-26B-A4B-MLX-mixed-7bit
