hereticness/Heretic-InfiR-1B-Instruct

Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Dec 29, 2025 · Architecture: Transformer · Status: Warm

Heretic-InfiR-1B-Instruct is a 1 billion parameter instruction-tuned model developed by hereticness. It is characterized by an 18% "disobedience rate" and a KL divergence of 0.1469 from its original base, indicating a distinct behavioral profile. With a context length of 32768 tokens, it is designed for applications where a non-standard response pattern, or exploration of less conventional outputs, is desired.


Heretic-InfiR-1B-Instruct Overview

Heretic-InfiR-1B-Instruct is a 1 billion parameter instruction-tuned model from hereticness, notable for its distinct behavioral characteristics. Unlike typical instruction-following models, this variant exhibits an 18% "disobedience rate" and a KL divergence of 0.1469 from its original base, indicating a deliberate deviation in its response patterns.

Key Characteristics

  • Parameter Count: 1 billion.
  • Context Length: Supports a substantial context window of 32768 tokens.
  • Unique Behavioral Profile: Exhibits an 18% "disobedience rate" and a KL divergence of 0.1469 from the original base, so it may not always adhere strictly to instructions, offering a different style of interaction.
  • Internal Weight Parameters: The card lists specific internal weight bounds, such as attn.o_proj.max_weight = 1.05 and mlp.down_proj.min_weight = 0.88, which likely contribute to its distinctive output characteristics.
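The KL divergence figure above quantifies how far the model's next-token distribution has drifted from the original base model. As an illustration only (the toy distributions below are invented, and the card does not state how the 0.1469 figure was computed), per-token KL divergence can be sketched as:

```python
import math

def kl_divergence(p, q):
    # KL(P || Q) in nats for two discrete distributions over the same support.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions over a four-token vocabulary (illustrative only):
base    = [0.70, 0.15, 0.10, 0.05]  # hypothetical original model
variant = [0.55, 0.25, 0.12, 0.08]  # hypothetical modified model

drift = kl_divergence(variant, base)
print(f"KL(variant || base) = {drift:.4f} nats")
```

A model-level figure like 0.1469 would typically be an average of such per-token values over an evaluation set, though the card does not specify the exact procedure.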

Potential Use Cases

  • Exploratory AI: Suitable for scenarios requiring non-standard or creative responses that diverge from typical instruction following.
  • Behavioral Research: Could be used in research to study model behavior under varying degrees of instruction adherence.
  • Niche Applications: Ideal for applications where a degree of unpredictability or a specific "heretical" output style is desired, rather than strict obedience.
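For reference, the model can be loaded like any Hugging Face causal LM. The sketch below uses standard transformers APIs; the repo id is taken from this card's title, and everything else (chat-template usage, bf16 dtype matching the Quant field above) is ordinary transformers boilerplate rather than a recipe published by hereticness:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "hereticness/Heretic-InfiR-1B-Instruct"

# Chat-style input; instruction-tuned models generally expect a message list
# rendered through the tokenizer's chat template.
messages = [
    {"role": "user", "content": "Explain KL divergence in one sentence."},
]

def run(messages, max_new_tokens=128):
    """Download the weights and generate a reply for a chat-style message list."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Calling `run(messages)` downloads the weights on first use and returns the generated reply as a string.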