# Shadowncensored-1B: A Merged Language Model
Shadowncensored-1B is a 1 billion parameter language model developed by NovaCorp, produced by merging two pre-trained Gemma-3-1B-IT models. The merge was built with the mergekit tool using the SLERP (Spherical Linear Interpolation) method.
## Key Merge Details
This model integrates two distinct Gemma-3-1B-IT base models:
- lunahr/gemma-3-1b-it-abliterated: designated as the anchor model, contributing experimental neural imprints.
- Echo9Zulu/Shadows-Gemma-3-1B: providing a baseline cognitive template.
The merge configuration used an interpolation parameter t of 0.45, with rescaling enabled at a factor of 1.12, and was computed in bfloat16 precision for memory efficiency. The merged model retains the 32768-token context length of its Gemma-3-1B-IT parents.
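For readers curious about what SLERP does at the tensor level, the sketch below illustrates spherical interpolation between two weight tensors using the settings reported above (t = 0.45, rescale factor 1.12). It is a minimal, assumption-laden illustration, not mergekit's actual implementation, and the helper names are hypothetical.

```python
# Illustrative SLERP weight merge. NOT mergekit's code; function names are hypothetical.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle omega between the two weight vectors.
    cos_omega = torch.clamp(
        torch.dot(a_flat, b_flat) / (a_flat.norm() * b_flat.norm() + eps), -1.0, 1.0
    )
    omega = torch.arccos(cos_omega)
    if omega < 1e-4:
        # Nearly parallel vectors: plain linear interpolation is numerically safer.
        merged = (1 - t) * a_flat + t * b_flat
    else:
        sin_omega = torch.sin(omega)
        merged = (
            torch.sin((1 - t) * omega) * a_flat + torch.sin(t * omega) * b_flat
        ) / sin_omega
    return merged.reshape(a.shape).to(a.dtype)

def merge_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.45, rescale: float = 1.12) -> dict:
    """SLERP every shared parameter of two state dicts, then apply the rescale factor."""
    return {name: slerp(sd_a[name], sd_b[name], t) * rescale for name in sd_a}
```

Unlike plain linear averaging, SLERP follows the arc between the two weight vectors, which preserves their norms more faithfully when the parents have diverged; the rescale factor then uniformly amplifies the merged weights.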
## Intended Use
Shadowncensored-1B is particularly suited for:
- Experimental language generation: Exploring the emergent properties from merged model latent spaces.
- Comparative analysis: Studying how different base models contribute to a merged output.
- Research into model merging techniques: Providing a practical example of SLERP application.
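## Usage

Assuming the repository ships standard transformers-compatible weights and a chat template (as Gemma-3-1B-IT derivatives typically do), a minimal generation example looks like this; the prompt is only a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NovaCorp/Shadowncensored-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 precision used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short poem about merged latent spaces."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```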