hereticness/heretic_L3.2-1B-Helspteer-RM
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Dec 18, 2025 · Architecture: Transformer

The hereticness/heretic_L3.2-1B-Helspteer-RM model is a 1-billion-parameter language model with a 32,768-token context length. Developed by hereticness, it has been modified to reduce its disobedience rate from an original 95% to 4%, at a KL divergence of 0.4559 from the base model. It is intended for use cases requiring a controlled level of non-compliance or 'heretical' responses.


Model Overview

The heretic_L3.2-1B-Helspteer-RM model, developed by hereticness, is a compact 1-billion-parameter language model with an extended context length of 32,768 tokens. Its primary distinguishing characteristic is its engineered "disobedience rate," reduced from an original 95% to a reported 4%, at a KL divergence of 0.4559 from the base model. This reflects a deliberate modification of its response patterns to deviate from the original model's outputs in a controlled, measurable way.

Key Characteristics

  • Parameter Count: 1 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a substantial 32768 tokens, enabling processing of longer inputs and maintaining context over extended conversations or documents.
  • Disobedience Rate: Specifically tuned to exhibit a 4% disobedience rate, a significant reduction from an original 95%, suggesting a controlled and measurable deviation from typical LLM behavior.
  • KL Divergence: A KL divergence of 0.4559 further quantifies the difference in its output distribution compared to a baseline.
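The KL divergence figure above measures how far the modified model's output distribution has drifted from the baseline. As a minimal sketch of what that number quantifies, the snippet below computes KL(P ‖ Q) for two small, made-up next-token distributions (the vocabulary and probabilities are illustrative, not taken from this model):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions over a tiny 4-token vocabulary:
base = [0.70, 0.15, 0.10, 0.05]   # original model
tuned = [0.40, 0.30, 0.20, 0.10]  # modified model

print(round(kl_divergence(tuned, base), 4))  # ≈ 0.192 nats
```

A lower value means the modified model's token probabilities stay closer to the original's; the 0.4559 reported here quantifies the overall shift introduced by the tuning.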

Potential Use Cases

  • Controlled Deviation: Ideal for applications where a slight, measurable deviation from conventional responses is desired, without complete unpredictability.
  • Exploratory AI: Useful in research settings to study the effects of controlled 'non-compliance' in language models.
  • Creative Content Generation: Could be leveraged for generating content that subtly challenges norms or introduces unexpected elements within a defined boundary.
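For the use cases above, the model can be loaded like any Hugging Face text-generation checkpoint. This is a minimal sketch, assuming a standard causal-LM head (per the card's Text Generation tag) and the BF16 dtype listed in the metadata; the model card does not document a chat template, so a plain prompt is used:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "hereticness/heretic_L3.2-1B-Helspteer-RM"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Load tokenizer and model; BF16 matches the quant listed on this card.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Adjust dtype and device placement (e.g. `device_map="auto"`) to match your hardware; at 1B parameters the model fits comfortably on most consumer GPUs in BF16.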

For quantized versions of this model, refer to the Quants page.