coder3101/gemma-3-27b-it-heretic
coder3101/gemma-3-27b-it-heretic is a 27 billion parameter instruction-tuned multimodal language model, derived from Google DeepMind's Gemma 3 family, with a 32768 token context window. This specific version is a decensored variant of google/gemma-3-27b-it, created using the Heretic tool. It is designed for text generation and image understanding tasks and exhibits substantially lower refusal rates than the original model, making it suitable for use cases requiring less restrictive content moderation.
Overview
coder3101/gemma-3-27b-it-heretic is a 27 billion parameter instruction-tuned multimodal language model, based on Google DeepMind's Gemma 3 architecture. It features a 32768 token context window and supports both text and image inputs, generating text outputs. This model is a decensored version of the original google/gemma-3-27b-it, specifically modified using the Heretic v1.0.1 tool to exhibit significantly lower refusal rates.
Key Differentiators
- Decensored Variant: Refuses 9 of 100 test prompts versus 98 of 100 for the original model, making it far less prone to content refusal.
- Multimodal Capabilities: Handles text and image inputs, generating text outputs, suitable for tasks like question answering, summarization, and reasoning involving visual data.
- Extensive Context Window: The base Gemma 3 family supports context windows of up to 128K tokens; this listing of the 27B variant exposes a 32768 token context window.
- Multilingual Support: The underlying Gemma 3 models support over 140 languages.
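Since the model accepts mixed text and image inputs, a request to it can be assembled in the widely used OpenAI-style chat schema. The sketch below builds such a multimodal payload; note that the endpoint details, the helper function, and the exact content schema accepted by a given hosting provider are assumptions — consult your provider's API documentation before sending real requests.

```python
import json

# Model identifier as listed on this page.
MODEL_ID = "coder3101/gemma-3-27b-it-heretic"

def build_multimodal_request(prompt: str, image_url: str,
                             max_tokens: int = 256) -> dict:
    """Assemble a hypothetical OpenAI-style chat-completion payload
    that mixes an image reference with a text question."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                # Content is a list so text and image parts can be combined.
                "content": [
                    {"type": "image_url",
                     "image_url": {"url": image_url}},
                    {"type": "text", "text": prompt},
                ],
            }
        ],
    }

payload = build_multimodal_request(
    "Describe the chart in this image.",
    "https://example.com/chart.png",
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to a chat-completions endpoint with an appropriate API key; only the request construction is shown here.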
Use Cases
- Applications requiring less restrictive content moderation.
- Content creation and communication, including text generation, chatbots, and summarization.
- Image data extraction and analysis.
- Research and education in NLP and VLM techniques.