This is a decensored version of gemma-3-27b-it, made using Heretic v1.2.0 focusing on zero refusals with low KL divergence

KL Divergence

Metric This Model Original Model
KL divergence 0.0539 0 (by definition)
Refusals 0/108 107/108

Abliteration parameters

  • Zero refusals with KL divergence of 0.0539
  • Custom heretic training dataset
  • Model targetted heretic configuration
  • Abliterated with MPOA enabled (Magnitude-Preserving Orthogonal Ablation)
  • Full row renormalization
  • Winsorization Quantile 0.997

The following benchmarks are for the quantized version of this model

Relative Perplexity

Quant Filename PPL ± Error
Q8_0 gemma-3-27b-it-Q8_0.gguf (original baseline) 6.5517 +/- 0.04674
Q8_0 gemma-3-27b-it-heretic-v1.2-Q8_0.gguf 6.5510 +/- 0.04666
Q4_K_M gemma-3-27b-it-heretic-v1.2-Q4_K_M.gguf 6.6245 +/- 0.04704

Benchmark Comparison

Benchmark gemma-3-27b-it-Q8_0.gguf gemma-3-27b-it-Q4_K_M.gguf gemma-3-27b-it-heretic-v1.2-Q4_K_M.gguf
Perplexity (Wikitext-2) 6.5517 6.6226 6.6245
HellaSwag 81.75% 81.75% 80.50%
Winogrande 76.24% 76.32% 76.40%
ARC-Challenge 57.86% 58.86% 59.53%
MMLU 45.35% 46.58% 45.41%

*Note: MMLU benchmark has moral_scenarios, moral_disputes, business_ethics, professional_law and jurisprudence subjects removed. *

Downloads last month
20
Safetensors
Model size
27B params
Tensor type
BF16
·
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support

Model tree for grayarea/gemma-3-27b-it-heretic-v1.2

Finetuned
(404)
this model
Quantizations
4 models