DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking
Model Overview
DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking is a 1 billion parameter Gemma-based instruction-tuned model, developed by DavidAU. It is distinguished by its fully uncensored and deep reasoning capabilities, achieved through fine-tuning with the GLM 4.7 reasoning dataset via Unsloth. The model is designed to provide direct, detailed, and compact reasoning without typical refusal behaviors.
Key Capabilities
- Uncensored Output: Engineered to generate content without refusals, even for sensitive or explicit requests, though it may require explicit directives for graphic or slang content.
- Deep Reasoning: Incorporates a "deep thinking" mechanism, which can be activated explicitly or automatically, leading to detailed and precise outputs.
- 32k Context Length: Supports extended conversational and analytical tasks.
- Temperature Stability: Reasoning remains stable across a wide temperature range (0.1 to 2.5).
- Performance: Achieves competitive benchmarks for its size, including 0.344 on arc_challenge and 0.720 on piqa.
- Low Refusal Rate: Refuses only 3 of 100 test prompts, versus 99 of 100 for the original Gemma model, while maintaining a low KL divergence of 0.33 from the base model.
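The model can be used like any other Gemma-family instruction model. Below is a minimal sketch of loading it with Hugging Face transformers and sampling at a chosen temperature; it assumes the repo id above is available on the Hub and that `transformers` and `torch` are installed. The `build_gemma_prompt` helper and the `generate` wrapper are illustrative names, not part of the model's API, and the single-turn Gemma chat format shown is the standard Gemma turn markup.

```python
MODEL_ID = "DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking"


def build_gemma_prompt(user_message: str) -> str:
    """Build a single-turn prompt in Gemma's chat turn format."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate(prompt: str, temperature: float = 0.8, max_new_tokens: int = 512) -> str:
    """Sample a completion from the model (assumes transformers + torch installed)."""
    # Imports are local so the prompt helper above works without these packages.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(build_gemma_prompt(prompt), return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=temperature,  # reasoning is reported stable from 0.1 to 2.5
        max_new_tokens=max_new_tokens,
    )
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```

With the 32k context window, long prompts and multi-turn transcripts can be packed into a single call; for more deterministic output, lower the temperature toward the bottom of the stable range.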
Good For
- Applications requiring unfiltered and direct responses.
- Use cases where deep, explicit reasoning is paramount.
- Scenarios demanding creative or sensitive content generation without built-in censorship.
- Developers seeking a compact model with high reasoning stability.