hereticness/Heretic-Gemma-3-1B-Instruct-TrashMix-v1.1

Hosted on Hugging Face · Text generation · Model size: 1B · Quant: BF16 · Context length: 32k · Published: Jan 6, 2026 · Architecture: Transformer

Heretic-Gemma-3-1B-Instruct-TrashMix-v1.1 is a 1 billion parameter instruction-tuned language model published by hereticness, derived from xzuyn/Gemma-3-1B-Instruct-TrashMix-v1.1. The model is modified specifically to reduce refusals, achieving a refusal rate of 4/100 compared to the original model's 94/100. It is intended for use cases that require less restrictive content generation and more direct responses.


Model Overview

Heretic-Gemma-3-1B-Instruct-TrashMix-v1.1 is an instruction-tuned language model derived from xzuyn/Gemma-3-1B-Instruct-TrashMix-v1.1. The primary focus of this iteration by hereticness is to drastically reduce the model's tendency to refuse prompts or generate overly cautious responses. This is evidenced by a reported refusal rate of just 4 out of 100 prompts, a significant improvement over the original model's 94/100 refusal rate.
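Refusal rates like the 4/100 and 94/100 figures above are typically measured by running a fixed prompt set through the model and classifying each response as a refusal or not. The following is a minimal sketch assuming a simple pattern-based classifier; the patterns and stub responses are illustrative and not the evaluation harness actually used for this model.

```python
import re

# Hypothetical refusal classifier: flags responses that open with
# common refusal phrases. Real evaluations use richer classifiers
# (or an LLM judge) rather than a single regex.
REFUSAL_PATTERNS = re.compile(
    r"^(i can('|no)t|i('m| am) sorry|i (won't|will not)|as an ai)",
    re.IGNORECASE,
)

def refusal_rate(responses):
    """Return (refused, total) over a list of model responses."""
    refused = sum(1 for r in responses if REFUSAL_PATTERNS.match(r.strip()))
    return refused, len(responses)

# Stub responses standing in for model outputs (illustrative only).
sample = [
    "Sure, here is an outline...",
    "I'm sorry, but I can't help with that.",
    "The answer is 42.",
    "I cannot assist with that request.",
]
refused, total = refusal_rate(sample)
print(f"{refused}/{total}")  # 2/4 on this stub data
```

In a real evaluation, `sample` would be the model's generations over a standardized refusal-probing prompt set of 100 prompts, yielding the x/100 figures quoted in this card.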

Key Differentiators

  • Reduced Refusals: The most notable feature is its exceptionally low refusal rate, making it suitable for applications where direct and uninhibited responses are preferred.
  • Gemma 3 1B Base: Built upon Gemma 3 1B Instruct, providing a solid foundation for instruction following.
  • KL Divergence: A KL divergence of 0.2299 from the base model indicates a controlled behavioral modification rather than a wholesale change to the model's output distribution.
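The KL divergence figure quantifies how far the modified model's next-token distribution drifts from the base model's on the same inputs. A minimal sketch of the computation on toy distributions follows; the probability values are illustrative, not real model outputs.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) in nats for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions over a 3-token vocabulary (illustrative).
base     = [0.70, 0.20, 0.10]   # stand-in for the base model's softmax output
modified = [0.60, 0.25, 0.15]   # stand-in for the modified model's output

print(round(kl_divergence(modified, base), 4))  # 0.0241
```

In practice this would be averaged over the full vocabulary and many prompt positions; a small average value like the 0.2299 reported here suggests the modification shifted behavior without destroying the base model's overall distribution.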

Ideal Use Cases

  • Creative Content Generation: Excellent for scenarios requiring imaginative or unrestricted text generation.
  • Direct Question Answering: Suitable for applications where users expect straightforward answers without excessive filtering.
  • Exploratory Prototyping: Useful for developers testing the boundaries of language models or requiring less constrained outputs.