Naberius-7B: A Pliant, Logic-Based, & Imaginative 7B Instruct Model
Naberius-7B, developed by CalderaAI, is a 7-billion-parameter Mistral-class model created through a spherical linear interpolation (SLERP) merge of three high-performing models: zephyr-7b-sft-beta, OpenHermes-2-Mistral-7B, and dolphin-2.2.1-mistral-7b. This merging technique, detailed in the Project Git, produces a smoother, more coherent blend of behaviors than standard linear interpolation.
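To illustrate the idea behind the merge, here is a minimal SLERP sketch over flat weight vectors. This is a simplification and an assumption about the general technique, not the Naberius-7B merge pipeline itself: real merges apply the interpolation per tensor across full model state dicts, and the function name, epsilon threshold, and linear-interpolation fallback below are illustrative choices.

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    Unlike plain linear interpolation, SLERP follows the arc between
    the vectors, preserving angular structure of the weights.
    t=0 returns v0, t=1 returns v1.
    """
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    # Cosine of the angle between the two vectors, clamped for safety.
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    dot = max(-1.0, min(1.0, dot))
    omega = math.acos(dot)
    if abs(math.sin(omega)) < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Merging three models this way would typically be done pairwise, e.g. SLERP the first two checkpoints, then SLERP the result with the third.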
Key Capabilities
- Exceptional Roleplay: Designed to deliver coherent and imaginative roleplay experiences, often performing at a level comparable to larger models.
- Strong Instruction Following: Excels at understanding and executing complex instructions, notable for a lightweight 7B model.
- Nuance and Adaptability: Demonstrates signs of spatial awareness and adapts well to conversational nuance.
- Uncensored & Pliant: Prioritizes logic, imagination, and accommodating user requests over built-in censorship, avoiding railroading or gaslighting.
- Efficient Inference: Delivers fast inference thanks to its lightweight 7B architecture.
Good For
- Creative Text Adventures & Roleplay: Ideal for applications requiring dynamic, imaginative, and boundary-free narrative generation.
- Personal Research & Entertainment: Suitable for personal use cases where a highly pliable and logic-oriented AI is desired.
- Developers Seeking Unbiased Models: Appeals to users looking for models less prone to "intentionally-baked-in bias" regarding positive-only outputs.
Naberius-7B aims to provide a powerful yet lightweight solution for users prioritizing imaginative freedom and precise instruction adherence in their AI interactions.