# Qwen3.5-0.8B Vision OCR – 16-bit LoRA Adapter
A LoRA adapter fine-tuned on top of `unsloth/Qwen3.5-0.8B` for document OCR and image-to-LaTeX conversion. The model takes document or formula images as input and outputs their LaTeX representation.
Trained with 16-bit LoRA (chosen over QLoRA for superior stability on Qwen3.5 vision architectures) on an NVIDIA A100-SXM4-80GB via Lightning.ai, using Unsloth for 2x faster fine-tuning.
GGUF version available: `Mustafaege/Qwen3.5-0.8B-GGUF-q4_k_m` for local inference with llama.cpp / Ollama.
## Model Details
| Property | Value |
|---|---|
| Base Model | unsloth/Qwen3.5-0.8B |
| Model Type | Vision-Language (Qwen3.5), Causal LM |
| Fine-tune Method | 16-bit LoRA (more stable than QLoRA for Qwen3.5 vision layers) |
| LoRA Rank (r) | 16 |
| LoRA Alpha | 16 |
| LoRA Dropout | 0 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Training Dataset | Mustafaege/qwen3.5-vision-ocr-v1 |
| Training Framework | Unsloth + TRL SFTTrainer |
| Training Platform | Lightning.ai |
| Training Hardware | NVIDIA A100-SXM4-80GB (79.4 GB VRAM) |
| License | Apache 2.0 |
| Developed by | Mustafaege |
## Why 16-bit LoRA Instead of QLoRA?
Qwen3.5 Vision uses specialized convolutional layers that are currently unstable under 4-bit quantization at training time. Switching to 16-bit LoRA avoids this instability while still being far more memory-efficient than full fine-tuning.
| Method | VRAM Usage | Stability | Quality |
|---|---|---|---|
| Full fine-tune | Very High | ✅ Stable | Best |
| 16-bit LoRA (this model) | Medium | ✅ Stable | Very Good |
| QLoRA (4-bit) | Low | ⚠️ Unstable for Qwen3.5 vision | Degraded |
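The memory advantage of LoRA comes from its low-rank factorization: instead of training a full dense weight update, it trains two small factors scaled by `alpha / r`. A quick sketch of the parameter savings at this adapter's `r=16` — the layer dimensions below are hypothetical and chosen only for illustration, not the real Qwen3.5-0.8B projection sizes:

```python
# LoRA replaces a dense weight update (d_out x d_in) with two low-rank
# factors: A (r x d_in) and B (d_out x r), scaled by alpha / r.
# Dimensions here are hypothetical, for illustration only.
d_in, d_out = 1024, 1024
r, alpha = 16, 16  # values used by this adapter

full_update_params = d_in * d_out      # dense update: 1,048,576 params
lora_params = r * d_in + d_out * r     # A + B factors: 32,768 params

print(full_update_params // lora_params)  # 32 (32x fewer trainable params)
print(alpha / r)                          # 1.0 (update scaling factor)
```

With `alpha == r`, the low-rank update is applied at unit scale, so the adapter's effective contribution is controlled entirely by what the factors learn.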
## Intended Use
This adapter is designed for document understanding and OCR pipelines where a vision-language model must:
- Convert mathematical formulas and equations in images to LaTeX
- Transcribe handwritten or printed scientific notation
- Process structured document layouts (papers, textbooks, slides)
### Out-of-Scope
- General-purpose visual question answering
- Natural scene understanding or image captioning
## How to Get Started

### Installation

```bash
pip install unsloth transformers peft trl torch pillow
```
### Load and Run with Unsloth (Recommended)

```python
from unsloth import FastVisionModel
from PIL import Image

model, tokenizer = FastVisionModel.from_pretrained(
    model_name="Mustafaege/Qwen3.5-0.8B-vision-LORA-16bit",
)
FastVisionModel.for_inference(model)

image = Image.open("formula.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Write the LaTeX representation for this image."},
        ],
    }
]

input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
inputs = tokenizer(image, input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
# Example output: \frac{d}{dx}\left(e^{x}\right) = e^{x}
```
### Load with PEFT (Standard)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "unsloth/Qwen3.5-0.8B"
adapter_id = "Mustafaege/Qwen3.5-0.8B-vision-LORA-16bit"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```
### Merge and Export (for GGUF conversion or deployment)

```python
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    model_name="Mustafaege/Qwen3.5-0.8B-vision-LORA-16bit",
)

# Merge the LoRA adapter into the base weights and save the result
model.save_pretrained_merged("Qwen3.5-0.8B-vision-OCR-merged", tokenizer)
```
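From the merged checkpoint, GGUF files like the Q4_K_M build linked above can be produced with llama.cpp's conversion and quantization tools. A rough sketch only — the script path, binary location, and output filenames below are assumptions based on a typical llama.cpp checkout, not part of this repo:

```shell
# Convert the merged HF checkpoint to a 16-bit GGUF file
python llama.cpp/convert_hf_to_gguf.py Qwen3.5-0.8B-vision-OCR-merged \
    --outfile qwen3.5-0.8b-vision-ocr-f16.gguf --outtype f16

# Quantize to Q4_K_M (the format published in the GGUF repo above)
llama.cpp/build/bin/llama-quantize \
    qwen3.5-0.8b-vision-ocr-f16.gguf qwen3.5-0.8b-vision-ocr-q4_k_m.gguf Q4_K_M
```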
## Training Details

### Dataset
Fine-tuned on `Mustafaege/qwen3.5-vision-ocr-v1`, a multimodal OCR dataset containing document and formula images paired with LaTeX ground-truth annotations.
### Hyperparameters
| Parameter | Value |
|---|---|
| Learning Rate | 2e-4 |
| Batch Size (per device) | 4 |
| Gradient Accumulation Steps | 4 (effective batch size: 16) |
| Warmup Steps | 10 |
| Weight Decay | 0.01 |
| Optimizer | AdamW 8-bit |
| Precision | bf16 |
| Gradient Checkpointing | Enabled (Unsloth) |
| Data Collator | UnslothVisionDataCollator |
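The effective batch size noted in the table is the product of per-device batch size, gradient accumulation steps, and GPU count. A one-line sanity check with the values above:

```python
# Effective batch size = per-device batch * grad accumulation steps * GPU count
per_device_batch = 4
grad_accum_steps = 4
num_gpus = 1

effective_batch = per_device_batch * grad_accum_steps * num_gpus
print(effective_batch)  # 16
```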
### Infrastructure

| Property | Value |
|---|---|
| Platform | Lightning.ai |
| GPU | NVIDIA A100-SXM4-80GB |
| VRAM Available | 79.4 GB |
| GPU Count | 1 |
| OS | Linux |
## Related Resources
| Resource | Link |
|---|---|
| GGUF (Q4_K_M) for llama.cpp / Ollama | Mustafaege/Qwen3.5-0.8B-GGUF-q4_k_m |
| Training Dataset | Mustafaege/qwen3.5-vision-ocr-v1 |
| Base Model | unsloth/Qwen3.5-0.8B |
| Unsloth | github.com/unslothai/unsloth |
## Limitations
- Optimized for document and formula images; performance degrades on natural scene images.
- Output quality depends on input image resolution and clarity.
- May struggle with very low-quality scans or heavily stylized fonts.
## Citation

```bibtex
@misc{mustafaege2026qwen35visionocr,
  title  = {Qwen3.5-0.8B Vision OCR: 16-bit LoRA Adapter for Image-to-LaTeX},
  author = {Mustafaege},
  year   = {2026},
  url    = {https://huggingface.co/Mustafaege/Qwen3.5-0.8B-vision-LORA-16bit}
}

@misc{qwen3_5,
  title     = {Qwen3.5 Technical Report},
  author    = {Qwen Team},
  year      = {2025},
  publisher = {Alibaba Cloud}
}
```
Trained 2x faster with Unsloth on Lightning.ai.