coder3101/gemma-3-1b-it-heretic
Task: Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32K · Published: Nov 23, 2025 · License: gemma · Architecture: Transformer

coder3101/gemma-3-1b-it-heretic is a 1-billion-parameter instruction-tuned causal language model derived from Google's Gemma 3 family, with a 32K-token context window. This version has been decensored using the Heretic v1.0.1 tool, which significantly reduces refusals compared to the original model. It is a multimodal model that accepts text and image inputs and generates text outputs, performing well on tasks like question answering, summarization, and reasoning, particularly where reduced content moderation is desired.


Overview

This model, coder3101/gemma-3-1b-it-heretic, is a 1-billion-parameter instruction-tuned variant of Google's Gemma 3 family with a 32K-token context window. It is a multimodal model, processing both text and image inputs (normalized to 896×896 resolution and encoded to 256 tokens each) to generate text outputs. Its key differentiator is its decensored nature, achieved using the Heretic v1.0.1 tool, which drastically reduces refusals (1 out of 100 test prompts, versus 99 out of 100 for the original google/gemma-3-1b-it model).
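A minimal text-generation sketch using the Hugging Face `transformers` pipeline, assuming a recent `transformers` release with Gemma 3 support is installed. The model id comes from this card; the `build_messages` helper and the system prompt are illustrative choices, not part of the repository.

```python
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble a conversation in the chat-messages format used by
    instruction-tuned models like this one."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def generate(user_prompt: str, max_new_tokens: int = 128) -> str:
    """Run one chat turn through the model and return the reply text."""
    # Imported lazily so the message helper works even without transformers.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="coder3101/gemma-3-1b-it-heretic",
        torch_dtype="bfloat16",  # matches the BF16 quantization listed above
    )
    messages = build_messages("You are a helpful assistant.", user_prompt)
    out = generator(messages, max_new_tokens=max_new_tokens)
    # Chat-style input yields the full conversation; the last message
    # is the assistant's reply.
    return out[0]["generated_text"][-1]["content"]

# Example usage (downloads the model weights on first run):
#   print(generate("Summarize the Gemma 3 architecture in one sentence."))
```

At 1B parameters in BF16, this fits comfortably on consumer GPUs and can also run on CPU, at reduced speed.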

Key Capabilities

  • Multimodal Processing: Handles text and image inputs for diverse tasks.
  • Text Generation: Capable of generating creative text formats, chatbot responses, and summaries.
  • Image Data Extraction: Interprets and summarizes visual data.
  • Reduced Refusals: Significantly less prone to refusing prompts compared to its base model.
  • Multilingual Support: Trained on data spanning over 140 languages.

Use Cases

This model is well-suited for applications requiring a less restrictive content policy, such as:

  • Content Creation: Generating diverse text and creative writing without frequent content moderation interruptions.
  • Conversational AI: Powering chatbots and virtual assistants where a broader range of responses is acceptable.
  • Research and Development: Experimenting with vision-language model (VLM) and NLP techniques, especially in scenarios where the base model's safety filters are too restrictive.
  • Resource-Limited Environments: Its relatively small size (1B parameters) allows for deployment on devices like laptops and desktops.
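In resource-limited or long-running deployments, the 32K-token context window is the hard budget to manage. A small sketch of a context guard under that limit; the helper names and the 512-token output reservation are illustrative choices, not part of the model.

```python
# 32K context window, as listed on this card.
MAX_CONTEXT_TOKENS = 32_768

def fits_in_context(token_ids: list[int], reserved_for_output: int = 512) -> bool:
    """Return True if the prompt leaves room for generation
    within the 32K-token window."""
    return len(token_ids) + reserved_for_output <= MAX_CONTEXT_TOKENS

def truncate_to_context(token_ids: list[int], reserved_for_output: int = 512) -> list[int]:
    """Keep the most recent tokens that fit the budget,
    dropping the oldest first."""
    budget = MAX_CONTEXT_TOKENS - reserved_for_output
    return token_ids[-budget:] if len(token_ids) > budget else token_ids
```

Token ids here would come from the model's tokenizer (e.g. `AutoTokenizer.from_pretrained("coder3101/gemma-3-1b-it-heretic")`); the guard itself is tokenizer-agnostic.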