Experimental ControlNet (Low Quality / Research Prototype)
Experimental model. Low quality. Not intended for production use.
This ControlNet was trained as a research experiment to explore line-based conditioning and colorization behavior in SDXL anime models.
Model Summary
This repository contains an experimental ControlNet for SDXL, trained on anime-style images.
The model is not stable, shows inconsistent color behavior, and should be treated as a research prototype rather than a finished or polished solution.
The goal of this experiment was to understand:
- How SDXL ControlNet learns colorization from line-based conditioning
- How different conditioning types (Canny vs Lineart) affect color consistency
Base Model
- Base model: cagliostrolab/animagine-xl-3.0
- Architecture: ControlNet SDXL
- Training framework: 🤗 Diffusers
- Precision: bf16
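A minimal inference sketch with 🤗 Diffusers is shown below. The repository id SubMaroon/ControlNet-anime-colorize is taken from this repo's model tree; the prompt, input file name, step count, and conditioning scale are placeholder assumptions, not tested settings.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Load the experimental ControlNet in bf16, matching the training precision.
controlnet = ControlNetModel.from_pretrained(
    "SubMaroon/ControlNet-anime-colorize", torch_dtype=torch.bfloat16
)

# Attach it to the Animagine XL 3.0 base model it was trained against.
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0",
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Conditioning image: dark lineart on a white background ("lineart.png" is a placeholder).
lineart = load_image("lineart.png")

image = pipe(
    prompt="1girl, colorful outfit, detailed eyes, masterpiece",  # placeholder prompt
    image=lineart,
    num_inference_steps=28,
    controlnet_conditioning_scale=0.8,  # assumption; expect to tune per image
).images[0]
image.save("colorized.png")
```

Given the color drift described below, results will likely vary strongly across seeds and conditioning scales.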
Conditioning Type
- Primary conditioning: Lineart / Canny-like edges
- Backgrounds are mostly white
- Line quality varies (mostly clean, some noisy samples)
Important limitation: lineart / Canny edge maps contain no color information, which leads to unstable and drifting color predictions.
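For reference, below is a minimal sketch of producing a Canny-style conditioning image with dark lines on a white background, matching the description above. The exact preprocessing used to build the training pairs is not documented here; the thresholds and file names are assumptions.

```python
import cv2
from PIL import Image

def to_canny_condition(path: str, low: int = 100, high: int = 200) -> Image.Image:
    """Extract Canny edges and invert them so the result is dark lines on a
    mostly white background, as described for the conditioning images above."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, low, high)   # white edges on black
    inverted = 255 - edges               # invert: dark lines on white
    return Image.fromarray(inverted).convert("RGB")

# "color_reference.png" is a placeholder input file.
to_canny_condition("color_reference.png").save("conditioning.png")
```

A dedicated lineart detector (e.g. from controlnet_aux) would be an alternative preprocessor, but which detector produced this dataset's conditioning images is not specified.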
Dataset
- Size: ~14,000 image pairs
- Format (per pair):
  - Original image (color)
  - Conditioning image (lineart / Canny)
  - Prompt (caption)
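As an illustration of this format, here is a minimal sketch of packing such pairs into a 🤗 Datasets dataset using the column names expected by the Diffusers ControlNet training example (image, conditioning_image, text). The file paths, captions, and output location are assumptions, not the actual dataset layout.

```python
from datasets import Dataset, Image

# Hypothetical paths and captions, for illustration only.
records = {
    "image": ["images/0001.png", "images/0002.png"],                 # original color images
    "conditioning_image": ["lineart/0001.png", "lineart/0002.png"],  # lineart / Canny maps
    "text": ["1girl, blue dress, smiling", "1boy, red jacket, city background"],
}

ds = (
    Dataset.from_dict(records)
    .cast_column("image", Image())               # decode file paths as images
    .cast_column("conditioning_image", Image())
)
ds.save_to_disk("anime_colorize_pairs")          # placeholder output location
```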
Known dataset issues
- Some lineart images are noisy or inconsistent
- Images are resized to square resolution (possible cropping artifacts)
- No explicit color supervision
- No palette or region-level color constraints
Training Configuration
Typical training setup:
```yaml
resolution: 768
train_batch_size: 2
gradient_accumulation_steps: 2
effective_batch_size: 4        # 2 × 2
learning_rate: 2e-5
lr_scheduler: cosine
max_train_steps: 6000–8000
mixed_precision: bf16
```
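For context, a configuration like this roughly maps onto the train_controlnet_sdxl.py example script from the 🤗 Diffusers repository. The command below is a hedged sketch; the dataset identifier, output directory, and column names are placeholders, not the actual training invocation.

```bash
# Placeholders: YOUR_DATASET, output_dir, and column names are illustrative.
accelerate launch train_controlnet_sdxl.py \
  --pretrained_model_name_or_path="cagliostrolab/animagine-xl-3.0" \
  --output_dir="controlnet-anime-colorize" \
  --dataset_name="YOUR_DATASET" \
  --image_column="image" \
  --conditioning_image_column="conditioning_image" \
  --caption_column="text" \
  --resolution=768 \
  --train_batch_size=2 \
  --gradient_accumulation_steps=2 \
  --learning_rate=2e-5 \
  --lr_scheduler="cosine" \
  --max_train_steps=8000 \
  --mixed_precision="bf16"
```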
Model tree for SubMaroon/ControlNet-anime-colorize
- Base model: stabilityai/stable-diffusion-xl-base-1.0
- Finetuned: Linaqruf/animagine-xl-2.0
- Finetuned: cagliostrolab/animagine-xl-3.0