# Z-Image Base GGUF

GGUF quantized version of Tongyi-MAI/Z-Image (Alibaba's 6B parameter diffusion model) for use with ComfyUI-GGUF.

## Model Information

| Property | Value |
|---|---|
| Base Model | Tongyi-MAI/Z-Image |
| Architecture | Lumina2 (DiT-based) |
| Parameters | ~6B |
| Type | Non-distilled (supports CFG, negative prompts, LoRA) |
| Recommended Steps | 28-50 |

## Available Quantizations

| File | Size | VRAM Required | Quality |
|---|---|---|---|
| z_image_base_Q8_0.gguf | 6.8 GB | ~7-8 GB | Best |
| z_image_base_BF16.gguf | 12.4 GB | ~13 GB | Original |
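The file sizes above follow roughly from bits per weight. A back-of-the-envelope sketch, assuming ~6B parameters and Q8_0's ~8.5 bits per weight (blocks of 32 int8 values plus a scale); the listed files are slightly larger, likely because some tensors are stored at higher precision:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGUF tensor-data size in GB from parameter count and bits/weight."""
    return n_params * bits_per_weight / 8 / 1e9

# Q8_0: ~8.5 bits/weight; BF16: 16 bits/weight.
q8_gb = gguf_size_gb(6e9, 8.5)     # ~6.4 GB vs. 6.8 GB listed
bf16_gb = gguf_size_gb(6e9, 16.0)  # ~12.0 GB vs. 12.4 GB listed
print(round(q8_gb, 1), round(bf16_gb, 1))
```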

## Usage with ComfyUI

### Requirements

1. ComfyUI
2. ComfyUI-GGUF custom nodes

### Installation

1. Install ComfyUI-GGUF:

   ```shell
   cd ComfyUI/custom_nodes
   git clone https://github.com/city96/ComfyUI-GGUF
   pip install --upgrade gguf
   ```

2. Download the GGUF file and place it in `ComfyUI/models/unet/`.

3. Use the "Unet Loader (GGUF)" node instead of the standard model loader.
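The download step can be scripted. A minimal sketch using `huggingface_hub.hf_hub_download` (the repo id `babakarto/z-image-base-gguf` and the Q8_0 filename are taken from this card; the helper names are illustrative, and the filename can be swapped for the BF16 variant):

```python
from pathlib import Path

# Repo id and filename as listed on this model card.
REPO_ID = "babakarto/z-image-base-gguf"
FILENAME = "z_image_base_Q8_0.gguf"

def unet_dir(comfy_root: str = "ComfyUI") -> Path:
    """Folder where ComfyUI-GGUF expects the model file."""
    return Path(comfy_root) / "models" / "unet"

def download_model(comfy_root: str = "ComfyUI") -> Path:
    # Deferred import: requires `pip install huggingface_hub` and network access.
    from huggingface_hub import hf_hub_download
    return Path(hf_hub_download(repo_id=REPO_ID, filename=FILENAME,
                                local_dir=unet_dir(comfy_root)))
```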

## Credits

Original model by Tongyi-MAI (Alibaba); GGUF quantization by babakarto.

## License

Apache 2.0 (same as the original Z-Image model)
