Sabomako/gemma-3-12b-it-heretic
Capabilities: Vision
Concurrency Cost: 1
Model Size: 12B
Quant: FP8
Ctx Length: 32k
Published: Mar 9, 2026
Architecture: Transformer

Sabomako/gemma-3-12b-it-heretic is a 12-billion-parameter instruction-tuned language model derived from Google's Gemma-3-12b-it. It was modified with the Heretic v1.2.0 tool using Magnitude-Preserving Orthogonal Ablation (MPOA) to reduce content refusals. The modified model maintains a low KL divergence of 0.024 from the original while significantly decreasing refusal rates, making it suitable for applications that require less restrictive content generation.
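The KL divergence figure measures how much the ablated model's output distributions drift from the original's (lower means closer to the base model's behavior). As an illustration only, not the Heretic tool's actual evaluation procedure, here is how KL divergence between two next-token probability distributions is computed; the example distributions are made up:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for two discrete distributions over the same support.

    Terms where p_i == 0 contribute nothing; q_i is assumed nonzero
    wherever p_i > 0.
    """
    return sum(p_i * math.log(p_i / q_i) for p_i, q_i in zip(p, q) if p_i > 0)

# Hypothetical next-token distributions: original vs. ablated model
p_original = [0.70, 0.20, 0.10]
p_ablated = [0.68, 0.21, 0.11]

print(kl_divergence(p_original, p_ablated))  # small positive value: outputs barely differ
```

In practice a tool like Heretic would average this quantity over many prompts and token positions; a result of 0.024 indicates the ablated model's distributions stay very close to the original's.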
