p-e-w/gemma-3-12b-it-heretic is a 12-billion-parameter instruction-tuned multimodal language model based on Google's Gemma 3 architecture, with a 32,768-token context window. This version has been decensored using the Heretic v1.0.0 tool, significantly reducing refusals compared to the original model. It is designed for text generation and image understanding tasks, including question answering, summarization, and reasoning, with a focus on open-ended responses.
Overview
This model, p-e-w/gemma-3-12b-it-heretic, is a 12-billion-parameter instruction-tuned variant of Google's Gemma 3 model. It has been modified with the Heretic v1.0.0 tool to produce a "decensored" version of the original google/gemma-3-12b-it. The primary differentiator is its significantly reduced refusal rate: 3 out of 100 test prompts refused, versus 97 out of 100 for the original. (Heretic separately tracks KL divergence from the original model's outputs to limit capability drift during decensoring.)
Key Capabilities
- Multimodal: Handles both text and image inputs, generating text outputs.
- Extended Context: Features a 32K-token context window at the 12B model size.
- Multilingual Support: Supports over 140 languages.
- Decensored Responses: Engineered to provide more open-ended and less restrictive outputs compared to its base model.
- Diverse Task Performance: Suitable for text generation, image analysis, question answering, summarization, and reasoning tasks.
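Since the model accepts mixed text and image input, a minimal sketch of how a multimodal request might be structured with Hugging Face transformers follows. The message schema assumes the standard Gemma 3 chat template; the image URL is a placeholder, and the load/generate step is left as comments because it downloads the full 12B weights.

```python
# Sketch: building a multimodal chat request for this model's chat template.
# The message schema follows the Gemma 3 convention in transformers;
# the commented-out section shows how it would typically be run.
MODEL_ID = "p-e-w/gemma-3-12b-it-heretic"

messages = [
    {
        "role": "user",
        "content": [
            # One image entry plus one text entry in a single turn.
            {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder URL
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

# from transformers import AutoProcessor, Gemma3ForConditionalGeneration
# processor = AutoProcessor.from_pretrained(MODEL_ID)
# model = Gemma3ForConditionalGeneration.from_pretrained(MODEL_ID, device_map="auto")
# inputs = processor.apply_chat_template(
#     messages, add_generation_prompt=True, tokenize=True,
#     return_dict=True, return_tensors="pt",
# ).to(model.device)
# output = model.generate(**inputs, max_new_tokens=128)
# print(processor.decode(output[0], skip_special_tokens=True))
```

For text-only use, the `content` list simply omits the image entry.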
Good for
- Applications requiring less restrictive or "decensored" AI responses.
- Text generation and image understanding tasks where the base Gemma 3 capabilities are desired without the original model's refusal tendencies.
- Resource-constrained deployments, since 12B parameters is relatively compact for this level of capability.
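When planning prompts against the 32,768-token context window mentioned above, it helps to budget how many tokens remain for input after reserving room for generation. The helper below is a hypothetical illustration of that arithmetic, not part of any official API:

```python
# Hypothetical helper: how many tokens remain for the prompt once room
# for generated output is reserved, given this model's stated 32,768-token
# context window.
CONTEXT_WINDOW = 32768

def prompt_budget(max_new_tokens: int, context_window: int = CONTEXT_WINDOW) -> int:
    """Tokens available for the prompt after reserving generation headroom."""
    if max_new_tokens < 0 or max_new_tokens > context_window:
        raise ValueError("max_new_tokens must be between 0 and the context window")
    return context_window - max_new_tokens

# Reserving 1024 tokens of output leaves 31,744 tokens for the prompt.
print(prompt_budget(1024))  # → 31744
```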