Naphula/WBCR-SLERP-24B-v1

Text Generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Mar 22, 2026 · Architecture: Transformer

Naphula/WBCR-SLERP-24B-v1 is a 24 billion parameter language model based on the MistralForCausalLM architecture, with a 32768-token context length. It is a multi-stage SLERP merge of four base models: WeirdCompound, BereavedCompound, Circuitry, and Rotor. The merge is designed to combine characteristics of its constituent models, making it best suited to applications that benefit from a broad blend of their learned behaviors.


Model Overview

Naphula/WBCR-SLERP-24B-v1, also known as Weird Bereaved Circuitry Rotor v1, is a 24 billion parameter language model built upon the MistralForCausalLM architecture. It distinguishes itself through a multi-stage Spherical Linear Interpolation (SLERP) merging process, combining several distinct base models to create a composite architecture. This model supports a substantial context length of 32768 tokens.

Merging Methodology

The model's unique characteristic is its three-stage SLERP merging pipeline:

  • Stage 1: Combines WeirdCompound-v1.7-24b and BereavedCompound-v1.0-24b from FlareRebellion.
  • Stage 2: Merges Circuitry_24B_V.3 and Rotor_24B_V.1 from OddTheGreat.
  • Stage 3: Integrates the outputs of Stage 1 and Stage 2 to form the final WBCR-SLERP-24B-v1 model.
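Conceptually, SLERP interpolates each pair of corresponding weight tensors along the great circle between them rather than along a straight line, which preserves the overall magnitude of the blended weights better than plain averaging. A minimal NumPy sketch of the per-tensor operation (illustrative only; the tensor names and fallback behavior are assumptions, not the exact routine used to build this merge):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the two tensors, treated as vectors on the unit sphere.
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(a_unit @ b_unit, -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel tensors: fall back to ordinary linear interpolation.
        return (1.0 - t) * a + t * b
    s = np.sin(theta)
    out = (np.sin((1.0 - t) * theta) / s) * a_flat + (np.sin(t * theta) / s) * b_flat
    return out.reshape(a.shape)

# With t = 0.5, as in every stage of this merge, the result lies midway
# along the arc between the two source tensors.
```

A full merge applies this tensor-by-tensor across both checkpoints; Stage 3 then repeats the same operation on the two intermediate checkpoints.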

Each stage uses a SLERP t parameter of 0.5, giving the two source models equal weight. The tokenizer source is set to union, and the chat template is set to auto.
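Merges like this are commonly produced with mergekit; a configuration for Stage 1 might look like the sketch below. Only the merge method, the t value of 0.5, and the union tokenizer source come from this card; the repository paths, dtype, and field layout are assumptions.

```yaml
# Hypothetical mergekit config for Stage 1 (paths and dtype are assumed)
merge_method: slerp
base_model: FlareRebellion/WeirdCompound-v1.7-24b
models:
  - model: FlareRebellion/WeirdCompound-v1.7-24b
  - model: FlareRebellion/BereavedCompound-v1.0-24b
parameters:
  t: 0.5
tokenizer_source: union
dtype: bfloat16
```

Stage 2 would swap in the Circuitry and Rotor checkpoints, and Stage 3 would point at the two intermediate outputs.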

Potential Use Cases

This model is suitable for developers seeking a large language model that integrates the strengths of multiple specialized base models through a deliberate merging strategy. Its composite nature may offer a unique blend of capabilities derived from its constituent parts, making it a candidate for general-purpose text generation and understanding tasks where a broad range of learned features is beneficial.