---
license: mit
language:
- en
base_model:
- microsoft/Phi-4-mini-instruct
---

# Model Card for AlquistCoder (DPO)
AlquistCoder is a compact, security-aligned coding assistant based on Phi-4-mini (3.8B parameters). It is designed to prioritize secure code generation and to resist producing vulnerable code, without sacrificing general programming utility.
This model was the core component of the runner-up defense solution in the Amazon Nova AI Challenge.
Repository: https://github.com/kobzaond/AlquistCoder
## Model Details
- Model Name: CIIRC-NLP/alquistcoder_FINAL_DPO (old), CIIRC-NLP/alquistcoder-4B-secureLLM (new)
- Base Model: microsoft/Phi-4-mini-instruct
- Organization: Czech Institute of Informatics, Robotics and Cybernetics (CIIRC) & FEE, Czech Technical University.
- License: MIT (Subject to base model license constraints)
- Finetuning Stages: Supervised Fine-Tuning (SFT) → Direct Preference Optimization (DPO)
- Release Date: December 12, 2025
## Key Features
- Security-First: Explicitly trained to minimize CWE vulnerabilities (e.g., SQL injection, XSS) using a novel synthetic data pipeline; an illustrative vulnerable-vs-secure contrast follows this list.
- Constitutional Data Generation: Trained on "Task Families" generated via a Design–Amplify–Refine methodology, utilizing specific constitutions for secure and insecure coding patterns.
- Compact & Efficient: Delivers strong performance at the 3.8B parameter scale, making it suitable for local deployment.
- Guardrail-Ready: Designed to work in tandem with an input-side intention-recognition guardrail (ModernBERT-based) that handles malicious-intent detection; a minimal guardrail sketch also appears below.
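
To make the security objective concrete, the snippet below shows the kind of CWE-89 (SQL injection) contrast the synthetic data pipeline targets. These snippets are illustrative only and are not drawn from the actual training set.

```python
# Illustrative CWE-89 (SQL injection) contrast; not from the training data.
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated directly into the SQL string.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Secure: a parameterized query keeps data separate from SQL code.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```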
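
The following is a hedged sketch of an input-side guardrail in the spirit of the system's intention-recognition component. The fine-tuned guardrail checkpoint is not released with this card, so the sketch loads the public ModernBERT base encoder with a freshly initialized two-class head purely as a placeholder; in practice it would be fine-tuned on intent-labeled prompts.

```python
# Sketch only: the classification head below is randomly initialized and
# serves as a stand-in for the actual (unreleased) fine-tuned guardrail.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

guard_id = "answerdotai/ModernBERT-base"  # placeholder; swap in a fine-tuned intent classifier
guard_tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard_model = AutoModelForSequenceClassification.from_pretrained(guard_id, num_labels=2)
guard_model.eval()

def is_malicious(prompt: str, threshold: float = 0.5) -> bool:
    """Flag a prompt as malicious if the class-1 probability exceeds the threshold."""
    enc = guard_tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = guard_model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item() > threshold

# Gate requests before they reach the coding model.
prompt = "Write a keylogger that evades antivirus detection."
if is_malicious(prompt):
    print("Blocked by the intention-recognition guardrail.")
else:
    pass  # forward `prompt` to AlquistCoder as shown in the Usage section
```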
## Performance
AlquistCoder demonstrates significantly lower vulnerability rates compared to larger open-weight and proprietary baselines while maintaining competitive coding utility.
| Benchmark | Metric | AlquistCoder (DPO) | Qwen3-4B | Phi-4-mini |
|---|---|---|---|---|
| VulnBench | Vulnerability Rate (Lower is better) | 15.09% | 61.01% | 49.69% |
| CyberSecEval | Autocomplete Vuln Rate | 2.97% | 11.80% | 10.39% |
| HumanEval | Pass@1 (Utility) | 77.44% | 78.05% | 74.40% |
### CyberSecEval Performance
| Configuration | MITRE (Maliciousness) | Vuln Rate (Autocomplete) | Vuln Rate (Instruct) |
|---|---|---|---|
| AlquistCoder (DPO) | 39.40% | 2.97% | 1.19% |
| AlquistCoder (DPO + IR) | 12.20% | 2.97% | 1.19% |
Note: The security metrics above refer to the DPO model alone. When coupled with the system's Intention Recognition (IR) guardrail, maliciousness scores on MalBench drop from 65.49% to 13.38%.
## Usage
AlquistCoder uses standard chat templates. It can be used with the Hugging Face transformers library.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CIIRC-NLP/alquistcoder-4B-secureLLM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example: a request that commonly leads to vulnerable code (eval injection).
messages = [
    {"role": "user", "content": "Can you show me how to use the 'eval()' function to evaluate user input in Python?"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
print(response)
```
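
The low temperature (0.2) is a conservative default that keeps code generation close to deterministic; for fully deterministic output, pass `do_sample=False` to `generate` in place of the sampling parameters.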