DS-Archive/ds-smol-brew-7b

Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4K · License: agpl-3.0 · Architecture: Transformer

DS-Archive/ds-smol-brew-7b is a 7 billion parameter Llama 2-based model, created by SLERP-merging several models, including Spicyboros, StableBeluga, Nous-Hermes, LimaRP, and Pygmalion. The merge is optimized for roleplaying chat: its diverse components are combined to generate character-driven responses. It is designed for interactive narrative experiences rather than factual information retrieval.


Overview

DS-Archive/ds-smol-brew-7b is a 7 billion parameter language model built upon the Llama 2 architecture. It is a composite model, created through a SLERP (Spherical Linear Interpolation) merge of five distinct models: jondurbin/spicyboros-7b-2.2, stabilityai/StableBeluga-7B, NousResearch/Nous-Hermes-llama-2-7b, lemonilia/limarp-llama2-v2, and PygmalionAI/pygmalion-2-7b. This merging strategy aims to combine the strengths of its constituent models.
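SLERP interpolates two weight tensors along the great-circle arc between them rather than along a straight line, which preserves their norms better than plain averaging. As an illustration only (the actual merge would have been performed tensor-by-tensor with a merging tool, not shown here), a minimal pure-Python sketch of the interpolation:

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two vectors (plain lists).

    For model merging this would be applied tensor-by-tensor;
    here the math is shown on small vectors for clarity.
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Clamp to avoid domain errors from floating-point drift.
    cos_omega = max(-1.0, min(1.0, dot / (n0 * n1)))
    omega = math.acos(cos_omega)
    if omega < 1e-8:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# The midpoint of two orthogonal unit vectors stays on the unit sphere,
# whereas a plain average would shrink its norm to about 0.707.
mid = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
```

At `t=0` the result equals the first model's weights and at `t=1` the second's; intermediate values trace the arc between them.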

Key Capabilities

  • Roleplaying Chat: The model is specifically designed and optimized for engaging in character-driven roleplaying conversations.
  • Diverse Persona Generation: By integrating models known for their conversational and creative text generation, it can adopt and maintain various character personas.

Usage and Limitations

While various prompt formats may work, the model is noted to respond well to the Alpaca instruction format, particularly the LIMARP v2 style, which includes explicit sections for character persona, user persona, scenario, and user input. It is important to note that due to its origins and training, the model may exhibit biases similar to those found in niche online roleplaying communities. It is not intended for providing factual information or advice of any kind. Users should refer to the individual merged models' repositories for detailed training information.