Gemma 3 12B IT - Heretic (Abliterated)
This model is an abliterated version of Google's Gemma 3 12B IT, created by DreamFast using the Heretic tool. Its primary purpose is to reduce model refusals and soft censorship, making it a more faithful text encoder, especially for video generation models like LTX-2.
Key Capabilities
- Reduced Refusals: Lowers the refusal rate from 100/100 (original model) to 7/100, enabling broader prompt acceptance.
- Minimal Model Damage: Achieves a low KL divergence of 0.0826 against the original model, indicating the abliteration process preserves core output quality.
- Uncensored Text Encoding: Removes inherent sanitization, allowing for more direct and faithful interpretation of creative prompts for downstream applications.
- Versatile Formats: Provided in HuggingFace, ComfyUI (bf16, fp8), and various GGUF quantizations (F16, Q8_0, Q6_K, Q5_K_M, Q4_K_M recommended) for diverse deployment scenarios.
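The KL divergence figure above quantifies how far the abliterated model's next-token distribution drifts from the original's (0 means identical). As a minimal illustrative sketch, here is the KL computation on two toy, made-up 4-token distributions (not actual model outputs):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats for two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions over a 4-token vocabulary (illustrative only).
p = [0.70, 0.20, 0.07, 0.03]   # "original" model
q = [0.66, 0.23, 0.08, 0.03]   # "abliterated" model

print(round(kl_divergence(p, q), 4))  # small positive value: distributions stay close
```

A value like 0.0826 over the full vocabulary similarly indicates the two models assign nearly the same probabilities to most tokens.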
Good For
- Video Generation: Ideal as a text encoder for models like LTX-2, where uncensored and faithful prompt adherence is crucial for visual output.
- Creative Applications: Suitable for scenarios requiring less restrictive content generation, ensuring prompts are interpreted without implicit softening or alteration.
- Research into Model Behavior: Useful for exploring the impact of censorship removal on LLM outputs and downstream tasks.
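When driving the model directly (e.g., via llama.cpp rather than a pipeline that applies the chat template for you), prompts should follow Gemma's turn format. A minimal sketch, assuming Gemma's standard `<start_of_turn>`/`<end_of_turn>` chat markers (the tokenizer normally prepends the BOS token itself):

```python
def format_gemma_prompt(user_message: str) -> str:
    """Build a single-turn prompt in Gemma's chat turn format."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Example: a video-generation style prompt for use as a text-encoder input.
prompt = format_gemma_prompt("A slow dolly shot through a neon-lit alley at night")
print(prompt)
```

In practice, prefer the tokenizer's built-in chat template (`tokenizer.apply_chat_template`) when loading via Hugging Face transformers, which applies this formatting automatically.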
Note: A newer version, Gemma 3 12B IT Heretic v2, adds vision capabilities and NVFP4 support and is generally recommended over this v1 model.