DavidAU/Gemma-3-27b-it-vl-SuperBrain7x-High-Reasoning-ULTRAMIND-Heretic-Uncensored

Hugging Face
Vision · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Ctx Length: 32k · Published: Feb 13, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

DavidAU's Gemma-3-27b-it-vl-SuperBrain7x-High-Reasoning-ULTRAMIND-Heretic-Uncensored is a 27 billion parameter Gemma 3 instruction-tuned model, fine-tuned with Unsloth on the Superbrain 7x high reasoning dataset. It features uncensored output, intact image processing, and enhanced reasoning capabilities, making it suitable for applications requiring deep thinking and unrestricted content generation across a wide temperature range.


Model Overview

DavidAU's Gemma-3-27b-it-vl-SuperBrain7x-High-Reasoning-ULTRAMIND-Heretic-Uncensored is a 27 billion parameter Gemma 3 instruction-tuned model, fine-tuned using Unsloth. Its primary focus is on deep thinking and uncensored content generation, leveraging the Superbrain 7x high reasoning dataset. The model maintains fully functional image processing capabilities.

Key Capabilities & Features

  • Enhanced Reasoning: The reasoning tune improves both image understanding ("intelligence") and text generation, and remains stable across a wide temperature range (0.1 to 2.5).
  • Uncensored Output: Designed to generate content without refusals, including potentially sensitive or explicit material. Users may need to provide specific directives (e.g., "use slang") for desired graphic levels.
  • High Context Length: The model supports a 128k context window (the hosted deployment metadata above lists 32k).
  • Flexible Activation: Reasoning can be activated via "think deeply: prompt" or through specific system prompts. A dedicated "chat-template-thinking.jinja" is available for always-on thinking.
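The "think deeply:" activation described above can be sketched as a small prompt-building helper. The payload shape assumes a generic OpenAI-compatible chat API; it is an illustration, not an official client for this model.

```python
# Sketch: engaging the model's reasoning mode via the documented
# "think deeply:" prompt prefix. The payload fields below are an
# assumption based on a generic OpenAI-compatible chat endpoint.

def build_messages(user_prompt: str, deep_think: bool = True) -> list:
    """Optionally prefix a user prompt with the reasoning trigger."""
    content = f"think deeply: {user_prompt}" if deep_think else user_prompt
    return [{"role": "user", "content": content}]

payload = {
    "model": "DavidAU/Gemma-3-27b-it-vl-SuperBrain7x-High-Reasoning-ULTRAMIND-Heretic-Uncensored",
    "messages": build_messages("Compare merge sort and quicksort."),
    "temperature": 1.0,  # reasoning is reported stable from 0.1 to 2.5
}
```

For always-on thinking, the model card instead points to the dedicated "chat-template-thinking.jinja" template rather than per-prompt prefixes.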

Performance & Benchmarks

The model demonstrates strong performance, particularly in reasoning tasks, with benchmarks showing improvements over its uncensored base model. Notably, its KL divergence is 0.07, indicating minimal damage during the de-censoring process, and it exhibits significantly reduced refusals (9/100 compared to 98/100 for the original).

Optimal Usage

For smoother operation and optimal quality, set a Smoothing_factor of 1.5 in interfaces such as KoboldCpp, oobabooga/text-generation-webui, or SillyTavern. If smoothing is not available in your interface, increase the repetition penalty to 1.1-1.15 instead.
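The recommendation above (prefer smoothing, fall back to repetition penalty) can be sketched as a small settings helper. The field names mirror KoboldCpp-style generation parameters but should be treated as assumptions; check your backend's API documentation for the exact keys.

```python
# Sketch of the card's recommended sampler setup. Key names
# ("smoothing_factor", "rep_pen") are assumptions modeled on
# KoboldCpp-style APIs, not verified against any specific version.

def sampler_settings(supports_smoothing: bool) -> dict:
    """Prefer Smoothing_factor 1.5; else fall back to rep penalty 1.1-1.15."""
    if supports_smoothing:
        return {"smoothing_factor": 1.5}
    return {"rep_pen": 1.12}  # any value in the advised 1.1-1.15 range

payload = {"prompt": "think deeply: ...", "max_length": 512,
           **sampler_settings(supports_smoothing=True)}
```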

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each config specifies the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
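For reference, here is where each listed parameter would go in an OpenAI-compatible completions request. The values below are neutral placeholders, not the actual top configs from Featherless users (those values are only shown in the interactive view).

```python
# Placeholder request illustrating the listed sampler parameters.
# Values are illustrative defaults only; substitute a real config.

payload = {
    "model": "DavidAU/Gemma-3-27b-it-vl-SuperBrain7x-High-Reasoning-ULTRAMIND-Heretic-Uncensored",
    "prompt": "think deeply: ...",
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,   # card advises 1.1-1.15 without smoothing
    "min_p": 0.05,
}
```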