coder3101/gemma-3-27b-it-heretic-v2
Hugging Face
Vision · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Ctx Length: 32K · Published: Nov 28, 2025 · License: gemma · Architecture: Transformer

coder3101/gemma-3-27b-it-heretic-v2 is a 27 billion parameter instruction-tuned multimodal language model, derived from Google's Gemma 3 family, with a 32K token context window. This version has been 'decensored' using the Heretic tool, significantly reducing refusal rates compared to the original Google model. It is designed for text generation and image understanding tasks, offering capabilities in question answering, summarization, and reasoning, particularly where less restrictive content filtering is desired.


Model Overview

coder3101/gemma-3-27b-it-heretic-v2 is a 27 billion parameter instruction-tuned multimodal language model, a modified version of Google's Gemma 3-27b-it. This model has been 'decensored' using the Heretic tool, resulting in a substantial reduction in refusal rate: 14 refusals per 100 prompts, versus 98/100 for the original model. It retains the core capabilities of the Gemma 3 family, which is built from the same research and technology as the Gemini models.

Key Capabilities

  • Multimodal Input: Handles both text and image inputs, with images normalized to 896x896 resolution and encoded to 256 tokens each.
  • Extensive Context: Features a large 32K token context window for the 27B variant, enabling processing of longer inputs.
  • Multilingual Support: Supports over 140 languages for text generation.
  • Reduced Refusals: Significantly less prone to refusing prompts compared to its base model, making it suitable for a wider range of applications.
  • General Purpose: Capable of various text generation and image understanding tasks, including question answering, summarization, and reasoning.
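As a rough illustration of how the fixed per-image tokenization interacts with the 32K context window, the sketch below assembles a multimodal chat request in the OpenAI-compatible schema and estimates the text budget left after attaching images. Only the 256-tokens-per-image and 32K figures come from the model card; the request shape, URLs, and reserved-output figure are illustrative assumptions.

```python
# Sketch: multimodal chat payload for an OpenAI-compatible endpoint.
# Assumptions: the serving endpoint accepts the OpenAI chat-completions
# schema with image_url content parts; the per-image cost (256 tokens)
# and the 32K window are from the model card, everything else is
# illustrative.

CTX_WINDOW = 32_000        # 32K token context (27B variant)
TOKENS_PER_IMAGE = 256     # images normalized to 896x896 -> 256 tokens each

def build_request(prompt: str, image_urls: list[str]) -> dict:
    """Assemble a chat-completions request mixing text and images."""
    content = [{"type": "text", "text": prompt}]
    content += [
        {"type": "image_url", "image_url": {"url": url}}
        for url in image_urls
    ]
    return {
        "model": "coder3101/gemma-3-27b-it-heretic-v2",
        "messages": [{"role": "user", "content": content}],
    }

def text_budget(num_images: int, reserved_output: int = 1024) -> int:
    """Rough tokens left for input text after images and a reserved reply."""
    return CTX_WINDOW - num_images * TOKENS_PER_IMAGE - reserved_output

req = build_request(
    "Describe both charts.",
    ["https://example.com/a.png", "https://example.com/b.png"],
)
# With two images, roughly 32,000 - 2*256 - 1,024 = 30,464 tokens
# remain for text input under these assumptions.
```

Because each image costs a flat 256 tokens regardless of its original resolution, even a few dozen images leave most of the 32K window available for text.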

Should I use this for my use case?

This model is particularly suited for developers who require a powerful multimodal LLM with a large context window and a significantly reduced tendency to refuse prompts. If your application involves creative content generation, open-ended dialogue, or tasks where the original Gemma 3 model's content filtering was too restrictive, this 'heretic' version offers greater flexibility. It is ideal for scenarios demanding less constrained responses, while still leveraging the strong foundational capabilities of the Gemma 3 architecture in reasoning, STEM, and multilingual tasks.

Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model tune the following samplers:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
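To show where these knobs go in practice, the sketch below attaches sampler settings to an OpenAI-compatible chat-completions request body. The values are placeholders, not the community presets referenced above; note that `min_p`, `top_k`, and `repetition_penalty` are extensions supported by some OpenAI-compatible servers rather than part of the core OpenAI schema.

```python
# Sketch: passing sampler settings alongside a chat request.
# All values below are illustrative placeholders, not the Featherless
# community presets for this model.

sampler_settings = {
    "temperature": 0.8,          # randomness of sampling
    "top_p": 0.95,               # nucleus sampling cutoff
    "top_k": 40,                 # restrict to k most likely tokens
    "frequency_penalty": 0.0,    # penalize frequent tokens (OpenAI-style)
    "presence_penalty": 0.0,     # penalize already-seen tokens (OpenAI-style)
    "repetition_penalty": 1.05,  # multiplicative repetition penalty (extension)
    "min_p": 0.05,               # drop tokens below this relative probability (extension)
}

request_body = {
    "model": "coder3101/gemma-3-27b-it-heretic-v2",
    "messages": [{"role": "user", "content": "Write a short haiku."}],
    **sampler_settings,
}
```

Servers that do not recognize the extension parameters typically ignore or reject them, so it is worth checking which of these your endpoint actually honors.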