Novaciano/Heretic.Erudite-1B

  • Parameters: 1B
  • Precision: BF16
  • Context length: 32,768 tokens
  • Updated: Feb 1, 2026
  • License: gemma

Model Overview

Novaciano/Heretic.Erudite-1B is a 1-billion-parameter language model developed by Novaciano. It was created with the SLERP (spherical linear interpolation) merge method via mergekit, combining two base models so that the result inherits traits from both.
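
For intuition, SLERP interpolates along the arc between the two models' weight vectors rather than along the straight line between them, which preserves weight norms better than plain averaging. Below is a minimal illustrative sketch in Python; mergekit's actual implementation additionally supports per-layer interpolation schedules and other refinements not shown here:

```python
# Illustrative sketch of SLERP applied to two weight tensors.
# This is NOT mergekit's implementation, just the core idea.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate a fraction t of the way from a to b along the arc between them."""
    a_flat = a.ravel().astype(np.float64)
    b_flat = b.ravel().astype(np.float64)
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = float(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    omega = np.arccos(dot)              # angle between the two weight vectors
    so = np.sin(omega)
    if so < eps:                        # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    mixed = (np.sin((1 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape)

# t = 0.6 weights the result toward the second tensor, mirroring the
# card's 0.6 weight for the 'Heretic' component.
rng = np.random.default_rng(0)
w_erudite = rng.normal(size=(4, 4))
w_heretic = rng.normal(size=(4, 4))
merged = slerp(0.6, w_erudite, w_heretic)
```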

Merge Details

This model is a blend of:

  • DavidAU/gemma-3-1b-it-heretic-extreme-uncensored-abliterated: Contributing to its less restrictive and "uncensored" output tendencies.
  • Stormtrooperaim/Erudite-V2-1b: Providing a foundation of general language understanding and generation capabilities.

The merge configuration assigned a weight of 0.6 to the 'Heretic' component, giving its characteristics the dominant influence on the result. The model supports a 32,768-token context window.
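
For reference, a mergekit SLERP configuration matching this description might look like the sketch below, written from Python so the config file and the merge command sit together. Only the two model names and the 0.6 weight come from the card; every other field (base-model choice, dtype, file and output names) is an assumption:

```python
# Minimal sketch (not the author's published config) of a mergekit SLERP
# merge approximating the card's description. Assumes mergekit is installed
# (`pip install mergekit`), which provides the `mergekit-yaml` CLI.
import subprocess
import textwrap

config = textwrap.dedent("""\
    merge_method: slerp
    base_model: Stormtrooperaim/Erudite-V2-1b
    slices:
      - sources:
          - model: Stormtrooperaim/Erudite-V2-1b
          - model: DavidAU/gemma-3-1b-it-heretic-extreme-uncensored-abliterated
    parameters:
      t: 0.6  # interpolation weight toward the 'Heretic' component (assumed mapping)
    dtype: bfloat16
""")

with open("heretic-erudite.yml", "w") as f:
    f.write(config)

# mergekit-yaml <config> <output_dir> writes the merged checkpoint.
subprocess.run(["mergekit-yaml", "heretic-erudite.yml", "./Heretic.Erudite-1B"], check=True)
```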

Key Characteristics

  • Merged Architecture: Combines the strengths of two 1B-parameter models that share the Gemma 3 1B architecture, which is what makes a parameter-space merge feasible.
  • Uncensored Tendencies: Inherits characteristics from the 'Heretic' base model, making it suitable for applications where content filtering is not desired or is managed externally.
  • General Purpose: Benefits from the 'Erudite-V2-1b' base for broad language tasks.

Use Cases

This model is particularly well-suited for:

  • Applications requiring less restrictive content generation.
  • Exploratory research into merged model behaviors.
  • Scenarios where a compact 1B parameter model with a long context window is beneficial.
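
As a quick start, the model should load through the standard Hugging Face transformers API for causal language models. The sketch below assumes the repository ships a Gemma-style chat template and standard config files:

```python
# Minimal sketch: loading and prompting the merged model with transformers.
# Assumes the repo follows the standard Gemma-3 causal-LM layout and
# includes a chat template (untested here).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Novaciano/Heretic.Erudite-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's BF16 precision
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the SLERP merge method in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```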