Naphula/StationV-24B-v1
Text generation · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Dec 16, 2025 · License: apache-2.0 · Architecture: Transformer
Naphula/StationV-24B-v1 is a 24 billion parameter merged language model, a modified expansion of the Circuitry/Rotor series, built upon TheDrummer/Precog-24B-v1 as its base model. This model is a Goetia checkpoint test, combining multiple specialized 24B models using a ties merge method. It is designed for general text generation, though it may exhibit weak prompt adherence due to its experimental merge composition.
StationV-24B-v1 Overview
StationV-24B-v1 is an experimental 24 billion parameter language model developed by Naphula, serving as a Goetia checkpoint test. It is a modified expansion of the Circuitry/Rotor series, utilizing TheDrummer/Precog-24B-v1 as its foundational base model.
Key Characteristics
- Merge Architecture: This model is the product of a "ties" merge combining fourteen distinct 24B models, each contributing with specific density and weight parameters; a minimal sketch of the TIES procedure follows this list. Notable merged components include Delta-Vector/Rei-24B-KTO, TheDrummer/Magidonia-24B-v4.2.0, zerofata/MS3.2-PaintedFantasy-v3-24B, and Naphula/BeaverAI_Fallen-Mistral-Small-3.1-24B-v1e_textonly.
- Context Length: It supports a context length of 32,768 tokens.
- Experimental Nature: The model is explicitly flagged as "partially broken" because it incorporates 2501 and 2503 finetunes, which may lead to weak prompt adherence.
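For readers unfamiliar with the technique, the sketch below illustrates how a TIES merge combines several finetunes of a shared base: trim each task vector to its highest-magnitude entries, elect a per-entry sign by majority, and average only the contributions that agree with that sign. This is a toy, single-tensor illustration under assumed semantics, not the actual mergekit configuration used to build StationV-24B-v1; all function and variable names here are hypothetical.

```python
import torch


def ties_merge(base, finetuned, densities, weights, eps=1e-8):
    """Toy TIES merge for one parameter tensor (illustrative only).

    base      -- the base model's tensor (here, standing in for Precog-24B-v1)
    finetuned -- the same tensor taken from each finetune being merged
    densities -- fraction of each delta's entries to keep in the trim step
    weights   -- per-model scaling factors applied before sign election
    """
    trimmed = []
    for ft, density in zip(finetuned, densities):
        delta = ft - base  # task vector: what this finetune changed
        k = max(1, int(density * delta.numel()))
        # Trim: zero all but the top-k entries by magnitude.
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        trimmed.append(delta * (delta.abs() >= threshold))

    stacked = torch.stack([w * d for w, d in zip(weights, trimmed)])
    # Elect: majority sign per entry across the weighted task vectors.
    elected_sign = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected_sign
    # Disjoint merge: keep only sign-agreeing contributions, then
    # normalize by the total weight of the models that agreed.
    merged = (stacked * agree).sum(dim=0)
    w = torch.tensor(weights, dtype=stacked.dtype).view(-1, *([1] * base.dim()))
    total_weight = (agree * w).sum(dim=0).clamp(min=eps)
    return base + merged / total_weight


# Tiny usage example with random tensors standing in for model weights.
base = torch.randn(8, 8)
finetunes = [base + 0.05 * torch.randn(8, 8) for _ in range(3)]
merged = ties_merge(base, finetunes, densities=[0.5] * 3, weights=[1.0] * 3)
```

The density and weight arguments correspond to the per-model parameters mentioned above: density controls how aggressively each finetune's delta is trimmed, and weight controls how strongly it votes during sign election.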
Potential Use Cases
- Research and Experimentation: Ideal for researchers and developers interested in exploring the effects of complex model merging techniques and their impact on language generation.
- General Text Generation: Despite its noted limitations, it can be used for various text generation tasks where some prompt adherence variability is acceptable.
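For basic text generation, a standard transformers loading path should apply. The snippet below is a sketch that assumes the repository ships conventional weights and a chat template; since the hosted build is listed as FP8, loading it at that precision may instead require an FP8-aware runtime such as vLLM. The prompt text is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Naphula/StationV-24B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption; the FP8 build may need a different loader
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene aboard a derelict station."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```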
Limitations
- Prompt Adherence: The model's experimental merge composition results in weak prompt adherence, so it may need more robust prompting strategies, such as restating hard constraints in both the system turn and the user turn; a sketch of this scaffold follows.
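One common mitigation for weak adherence is to state the non-negotiable instructions twice: once in the system message and once as a reminder at the end of the user message. The scaffold below is a hypothetical example of that pattern, not a recommendation documented by the model author; the rule text is placeholder content.

```python
# Hypothetical prompt scaffold: repeat the hard constraints in the system
# turn and again at the end of the user turn to compensate for weak adherence.
system = (
    "You are a fiction writer. Follow ALL formatting rules exactly: "
    "third person, past tense, no out-of-character commentary."
)
task = "Continue the scene where the engineer reaches the reactor deck."
reminder = "(Reminder: third person, past tense, stay in character.)"

messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": f"{task}\n\n{reminder}"},
]
```

This messages list can be passed directly to the tokenizer.apply_chat_template call shown in the loading example above.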