Model Overview
DavidAU/gemma-3-12b-it-vl-GPT-5.1-High-Heretic-Uncensored-Thinking is a 12-billion-parameter Gemma 3 instruction-tuned model, fine-tuned on a GPT 5.1 High reasoning dataset. Developed with Unsloth on local Linux hardware, the model is designed for deep, uncensored thinking and output generation, aiming to provide direct, detailed responses without refusals.
Key Capabilities & Features
- Uncensored Reasoning: Provides full deep thinking and uncensored outputs, designed to fulfill user requests without refusal.
- Enhanced Reasoning: Reasoning is compact, detailed, and stable across a temperature range of 0.1 to 2.5. The reasoning fine-tune also influences general model operation, output generation, image processing, and benchmark performance.
- Extended Context: Features a 128k context window.
- Flexible Activation: Thinking can activate automatically as a result of the fine-tuning, be triggered explicitly by prefixing a prompt with "think deeply:", or be kept always on via a modified Jinja chat template.
- Improved Benchmarks: Shows improved performance over its Heretic uncensored base model across various benchmarks (e.g., arc_challenge, hellaswag, piqa).
- Low Refusal Rate: Achieves a refusal rate of 7/100, significantly lower than the original Google Gemma 3 model's 98/100, with a low KL divergence of 0.0826.
Good For
- Unrestricted Content Generation: Ideal for use cases requiring explicit, nuanced, or uncensored content, including creative writing, roleplay, or scenarios where models typically refuse requests.
- Deep Reasoning Tasks: Suitable for applications benefiting from detailed and direct reasoning, including those involving image processing.
- Customizable Behavior: Users can influence reasoning and output generation through optional system prompts or by adjusting Jinja templates for always-on thinking.
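Since the card states reasoning is stable across temperatures of 0.1 to 2.5, a minimal sketch of keeping sampling settings inside that range might look like the following; the range bounds come from the card, while the helper and the other sampling values are assumptions:

```python
def clamp_temperature(requested: float, lo: float = 0.1, hi: float = 2.5) -> float:
    """Clamp a requested sampling temperature into the 0.1-2.5 range the
    model card reports as stable. Illustrative helper, not part of any API."""
    return max(lo, min(hi, requested))

# Example sampling settings; parameter names follow common generation
# kwargs, and the specific values here are illustrative choices.
generation_settings = {
    "do_sample": True,
    "temperature": clamp_temperature(1.0),
    "top_p": 0.95,
    "max_new_tokens": 1024,
}
```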