Novaciano/Heretic.Erudite_v2-1B

Hosted on Hugging Face · Text Generation · Model Size: 1B · Quant: BF16 · Context Length: 32k · Published: Feb 1, 2026 · License: Gemma · Architecture: Transformer

Novaciano/Heretic.Erudite_v2-1B is a 1 billion parameter language model created by Novaciano, merged with the SLERP method from Stormtrooperaim/Erudite-V2-1b and DavidAU/gemma-3-1b-it-heretic-extreme-uncensored-abliterated. The merge blends the characteristics of both parents and retains a 32768 token context length, targeting applications that need a compact yet capable model.

Model Overview

Novaciano/Heretic.Erudite_v2-1B is a 1 billion parameter language model developed by Novaciano, created through a merge of existing pre-trained models. This model utilizes the SLERP (Spherical Linear Interpolation) merge method, which combines the weights of different models to create a new one with blended characteristics. The merge process involved two primary base models: Stormtrooperaim/Erudite-V2-1b and DavidAU/gemma-3-1b-it-heretic-extreme-uncensored-abliterated.

Merge Details

The merge was configured with an interpolation parameter of 0.65, pulling the result toward DavidAU/gemma-3-1b-it-heretic-extreme-uncensored-abliterated and away from Stormtrooperaim/Erudite-V2-1b. The merging was performed with mergekit, a toolkit for combining language models. The configuration specified bfloat16 for both the internal dtype and out_dtype, keeping precision consistent throughout the merge.
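
The card does not publish the full mergekit configuration, so the sketch below is a hypothetical reconstruction: the slerp method, the two source models, t = 0.65, and the bfloat16 dtypes come from this card, while the layer range (assuming Gemma 3 1B's 26 transformer layers) and the choice of base model are assumptions. It uses mergekit's documented Python entry points (MergeConfiguration, run_merge).

```python
# Hypothetical reconstruction of the merge via mergekit's Python API.
# Only the two source models, the slerp method, t = 0.65, and the
# bfloat16 dtypes come from this card; everything else is assumed.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YAML = """
merge_method: slerp
base_model: Stormtrooperaim/Erudite-V2-1b
slices:
  - sources:
      - model: Stormtrooperaim/Erudite-V2-1b
        layer_range: [0, 26]   # assumed: Gemma 3 1B has 26 transformer layers
      - model: DavidAU/gemma-3-1b-it-heretic-extreme-uncensored-abliterated
        layer_range: [0, 26]
parameters:
  t: 0.65                      # 0.65 toward the heretic model
dtype: bfloat16
out_dtype: bfloat16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG_YAML))
run_merge(
    merge_config,
    "./Heretic.Erudite_v2-1B",            # output directory
    options=MergeOptions(copy_tokenizer=True),
)
```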

Key Characteristics

  • Merged Architecture: Combines features from two distinct base models, potentially inheriting their strengths.
  • SLERP Method: Blends the parents' weights by spherical linear interpolation rather than a plain average (see the sketch after this list).
  • Compact Size: At 1 billion parameters, it offers a balance between performance and computational efficiency.
  • Extended Context: Supports a context length of 32768 tokens, suitable for processing longer inputs.
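
To make the method concrete: SLERP interpolates along the arc between two weight vectors rather than along the straight line a plain average follows, which better preserves each tensor's magnitude. Below is a minimal sketch of the per-tensor operation, with illustrative tensor shapes that are assumptions, not the model's actual layout.

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Cosine of the angle between the two flattened weight vectors.
    cos = (v0 @ v1) / (v0.norm() * v1.norm() + eps)
    omega = torch.acos(cos.clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        out = (1.0 - t) * v0 + t * v1
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
    return out.reshape(w0.shape).to(w0.dtype)

# Illustrative stand-ins for one layer's weights in each parent model.
w_erudite = torch.randn(1152, 1152)
w_heretic = torch.randn(1152, 1152)
merged = slerp(0.65, w_erudite, w_heretic)   # t = 0.65 leans toward the second tensor
```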

Potential Use Cases

This model suits developers who want a compact merged model that combines characteristics of its constituent parts. It fits scenarios where a blend of the base models' capabilities is desired and where the 1 billion parameter footprint and 32768 token context window match the deployment budget.
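
A minimal loading sketch with Hugging Face transformers, assuming the merged model keeps the Gemma 3 chat template; the prompt is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Novaciano/Heretic.Erudite_v2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the merge's bfloat16 out_dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain SLERP model merging in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```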