DavidAU/Gemma-3-27b-it-vl-GLM-4.7-Flash-HI16-Heretic-Uncensored-Thinking

Vision · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Ctx Length: 32k · Published: Feb 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

DavidAU/Gemma-3-27b-it-vl-GLM-4.7-Flash-HI16-Heretic-Uncensored-Thinking is a 27 billion parameter instruction-tuned Gemma model, fine-tuned by DavidAU using the GLM 4.7 Flash reasoning dataset. The model is uncensored and targets deep reasoning, creative tasks, and general use, outperforming its base model across seven benchmarks. It has a 32,768-token context length and is tuned for compact, detailed reasoning and unrestricted output, making it suitable for applications requiring nuanced, unfiltered text generation.

This is a 27 billion parameter instruction-tuned Gemma model, developed by DavidAU, that has been fine-tuned for deep reasoning and uncensored output. Utilizing the GLM 4.7 Flash reasoning dataset, this model significantly enhances the base Gemma 27B's capabilities, exceeding its performance across all seven tested benchmarks.

Key Capabilities

  • Uncensored Generation: Designed to produce content exactly as requested, without refusal or 'nanny' filters, including explicit or graphic material when directed.
  • Deep Reasoning: Incorporates advanced reasoning capabilities, leading to compact yet highly detailed and precise outputs across various tasks.
  • Enhanced Image Processing: The reasoning enhancements also extend to improved image processing capabilities.
  • Broad Use Cases: Excels in both creative writing and general-purpose applications.
  • Temperature-Stable Reasoning: Maintains consistent reasoning quality across a wide temperature range (0.1 to 2.5).
  • Flexible Activation: Reasoning can be activated automatically, via an optional system prompt, or by using the "think deeply:" prefix in the prompt.
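The activation options above can be sketched as a small prompt-building helper. This is a minimal illustration, not an official API for the model: the helper name, the OpenAI-style message-dict shape, and the sampling parameters are assumptions; only the `think deeply:` prefix, the optional system prompt, and the 0.1–2.5 temperature range come from the card.

```python
def build_reasoning_prompt(user_text, system_prompt=None, force_thinking=True):
    """Assemble a chat-style message list for the model.

    Deep reasoning can be triggered by an optional system prompt or by
    prefixing the user message with "think deeply:" (per the model card).
    The message-dict format here assumes an OpenAI-compatible chat API.
    """
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    content = f"think deeply: {user_text}" if force_thinking else user_text
    messages.append({"role": "user", "content": content})
    return messages


# Hypothetical sampling settings; the card states reasoning quality holds
# anywhere in the 0.1-2.5 temperature range, so 0.8 is one safe choice.
params = {"temperature": 0.8, "max_tokens": 1024}

msgs = build_reasoning_prompt("Explain quicksort's average-case complexity.")
```

The resulting `msgs` list can then be passed to whatever inference client serves the model (e.g. an OpenAI-compatible endpoint); omit `force_thinking` when relying on automatic activation or a system prompt instead.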

Good For

  • Applications requiring unrestricted and direct content generation.
  • Tasks demanding highly detailed and precise reasoning.
  • Creative writing, role-playing, and scenarios where nuanced, uncensored responses are critical.
  • Use cases benefiting from enhanced image processing through improved reasoning.
  • Developers seeking a powerful 27B model that offers superior benchmark performance compared to its base.