Roman0/gemma-3-1b-it-heretic

Hosted on Hugging Face · Text generation · Model size: 1B · Quantization: BF16 · Context length: 32K · Published: Dec 13, 2025 · License: Gemma · Architecture: Transformer

Roman0/gemma-3-1b-it-heretic is a 1-billion-parameter instruction-tuned causal language model based on Google's Gemma 3 architecture, with a 32K-token context window. It has been decensored using the Heretic v1.1.0 tool, which significantly reduces refusals compared to the original Gemma 3 1B-IT model. It is primarily intended for applications that require less restrictive content generation and direct responses.


Overview

Roman0/gemma-3-1b-it-heretic is a specialized version of Google's Gemma 3 1B-IT model, which is part of the Gemma family of lightweight, open models built from the same research as Gemini. This variant has been decensored using the Heretic v1.1.0 tool, resulting in a substantial reduction in refusal rate: 4 refusals out of 100 evaluation prompts, versus 99/100 for the original model.

Key Capabilities

  • Decensored Output: Significantly reduced content refusals compared to the base Gemma 3 1B-IT model, allowing for broader response generation.
  • Multimodal Foundation: While this specific model is text-only, its base Gemma 3 architecture supports text and image inputs, generating text outputs.
  • Efficient Deployment: Its 1 billion parameter size and 32K context window make it suitable for deployment in resource-limited environments like laptops or desktops.
  • Multilingual Support: The underlying Gemma 3 models were trained with multilingual data, supporting over 140 languages.
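
Usage

Given the 1B parameter size noted above, the model can run locally on consumer hardware. A minimal loading-and-generation sketch using the Hugging Face `transformers` library is shown below; it assumes `transformers` and `torch` are installed and the `Roman0/gemma-3-1b-it-heretic` repository is accessible, and uses the chat template that Gemma instruction-tuned models ship with. Generation parameters here are illustrative, not recommended defaults.

```python
def generate_reply(prompt: str,
                   model_id: str = "Roman0/gemma-3-1b-it-heretic",
                   max_new_tokens: int = 128) -> str:
    """Generate a single-turn reply from the model.

    Imports are kept inside the function so that importing this module
    does not trigger a multi-gigabyte model download.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # BF16 matches the published quantization of this checkpoint.
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16
    )

    # Instruction-tuned Gemma models expect the chat template format.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        outputs[0][inputs.shape[-1]:], skip_special_tokens=True
    )
```

A call such as `generate_reply("Summarize the plot of Hamlet.")` returns the model's text response. For memory-constrained machines, loading in BF16 as above keeps the footprint to roughly 2 GB of weights.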

Good For

  • Unrestricted Content Generation: Use cases where the default safety filters of instruction-tuned models are too restrictive.
  • Creative Writing & Roleplay: Scenarios requiring more freedom in narrative and character interaction.
  • Direct Question Answering: Applications where direct, unfiltered answers are preferred over cautious or refusal-based responses.

Limitations

As a decensored model, it carries an increased potential for generating harmful, biased, or inappropriate content, and users should apply their own safeguards where appropriate. Its responses are based on its training data and may still contain factual inaccuracies or reflect societal biases.