DavidAU/gemma-3-12b-it-vl-Deepseek-v3.1-Heretic-Uncensored-Thinking
DavidAU/gemma-3-12b-it-vl-Deepseek-v3.1-Heretic-Uncensored-Thinking is a 12-billion-parameter Gemma fine-tune, developed by DavidAU, featuring a 128k context window. The model is designed for uncensored, deep reasoning, leveraging the Deepseek 3.1 reasoning dataset. It produces detailed, direct outputs and applies its reasoning capabilities to image processing as well, making it suited to use cases requiring unrestricted and precise responses.
Overview
DavidAU/gemma-3-12b-it-vl-Deepseek-v3.1-Heretic-Uncensored-Thinking is a 12-billion-parameter Gemma fine-tune by DavidAU, focused on uncensored, deep reasoning. It was fine-tuned with Unsloth on the Deepseek 3.1 reasoning dataset, features a 128k context window, and reasons stably across a temperature range of 0.1 to 2.5.
Key Capabilities
- Uncensored Output: Designed to respond directly without refusals, even for sensitive content, though graphic or explicit output levels may require explicit direction.
- Enhanced Reasoning: Improves general model operation, output generation, image processing, and benchmark performance. Reasoning can be activated explicitly by prefixing the prompt with "think deeply:", or through optional system prompts and a specialized Jinja template.
- Low KL Divergence: Achieves a KL divergence of 0.0826 from the original model, indicating minimal damage to the original distribution during de-censoring, while refusals drop sharply (7/100 versus 98/100 for the original).
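The "think deeply:" activation described above can be wired into an ordinary chat-style prompt. The sketch below is a minimal illustration, assuming a standard role/content message format; the `build_messages` helper and the system-prompt text are hypothetical (the card mentions optional system prompts but does not specify their wording), and actual generation would pass the resulting list to your inference stack with a temperature anywhere in the card's stated stable range of 0.1 to 2.5.

```python
# Hypothetical helper for the "think deeply:" activation convention
# described on the model card. Model ID is taken from the card itself.
MODEL_ID = "DavidAU/gemma-3-12b-it-vl-Deepseek-v3.1-Heretic-Uncensored-Thinking"

def build_messages(user_prompt, think=True, system_prompt=None):
    """Assemble a chat-style message list.

    When `think` is True, prefix the user turn with "think deeply: "
    to trigger the model's explicit reasoning mode.
    """
    messages = []
    if system_prompt:  # optional system prompt, per the card
        messages.append({"role": "system", "content": system_prompt})
    content = f"think deeply: {user_prompt}" if think else user_prompt
    messages.append({"role": "user", "content": content})
    return messages

msgs = build_messages("Summarize the plot in two sentences.")
print(msgs[-1]["content"])  # user turn carries the activation prefix
```

The same message list would then be fed to, for example, a `transformers` chat pipeline loaded from `MODEL_ID`; passing `think=False` yields a plain prompt with no activation prefix.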
Good For
- Applications requiring unrestricted and direct content generation.
- Use cases where deep and detailed reasoning is critical for accurate and precise outputs.
- Scenarios benefiting from enhanced image processing through advanced reasoning.
- Developers seeking a model with flexible control over output style through system prompts and thinking activation mechanisms.