Weyaxi/Dolphin-Nebula-7B
Dolphin-Nebula-7B is a 7 billion parameter language model developed by Weyaxi, created by merging ehartford/dolphin-2.0-mistral-7b and PulsarAI/Nebula-7B-Lora. The model supports an 8192-token context length, making it suitable for tasks requiring moderate context understanding. Its merged architecture suggests a focus on combining the strengths of its constituent models for general language generation and comprehension.
Dolphin-Nebula-7B Overview
Dolphin-Nebula-7B is a 7 billion parameter language model developed by Weyaxi, resulting from a merge of two distinct models: ehartford/dolphin-2.0-mistral-7b and PulsarAI/Nebula-7B-Lora. This merging strategy aims to combine the capabilities and characteristics of its base models into a single, more versatile entity.
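The model card does not document the exact merge procedure Weyaxi used, but the general pattern of folding a LoRA adapter into a base model can be sketched with the peft library. The repo IDs below come from the model card; the procedure itself is an assumption about how such a merge is typically done, not a reproduction of the author's steps.

```python
# A minimal sketch of a base-model + LoRA merge, assuming the standard
# peft workflow. This illustrates the general technique, not Weyaxi's
# actual merge procedure, which is not documented in the model card.
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model (ehartford/dolphin-2.0-mistral-7b per the model card).
base = AutoModelForCausalLM.from_pretrained("ehartford/dolphin-2.0-mistral-7b")

# Attach the LoRA adapter (PulsarAI/Nebula-7B-Lora per the model card),
# then fold the adapter weights into the base weights so the result is a
# standalone checkpoint with no adapter dependency.
merged = PeftModel.from_pretrained(base, "PulsarAI/Nebula-7B-Lora")
merged = merged.merge_and_unload()

# Save the merged weights for later use or upload.
merged.save_pretrained("dolphin-nebula-7b-merged")
```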
Key Characteristics
- Merged Architecture: Built upon the foundation of dolphin-2.0-mistral-7b and Nebula-7B-Lora, suggesting a blend of their respective strengths.
- Parameter Count: Features 7 billion parameters, placing it in the medium-sized category for efficient deployment and inference.
- Context Length: Supports an 8192-token context window, enabling it to process and generate longer sequences of text.
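As a rough illustration of the context window noted above, inputs can be capped at 8192 tokens at tokenization time. The max_length value below reflects the model card's stated context length and has not been verified against the model's config file.

```python
# A hedged sketch of respecting the 8192-token context window when
# preparing input; 8192 is taken from the model card, not verified
# against the published config.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Dolphin-Nebula-7B")

long_document = "some very long input text " * 2000  # placeholder input

inputs = tokenizer(
    long_document,
    truncation=True,
    max_length=8192,  # context window stated in the model card
    return_tensors="pt",
)
print(inputs["input_ids"].shape)  # at most (1, 8192)
```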
Performance
While specific benchmark scores are not provided in the model card, its presence on the Open LLM Leaderboard indicates that it has been evaluated on standard benchmarks such as ARC, HellaSwag, MMLU, and TruthfulQA. Users interested in detailed performance metrics should consult the leaderboard directly.
Potential Use Cases
Given its merged nature and moderate size, Dolphin-Nebula-7B is likely suitable for a range of general-purpose language tasks, including text generation, summarization, and conversational AI where a balance between performance and resource efficiency is desired.
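For the general-purpose tasks above, the model can be driven through the standard transformers text-generation pipeline. This is a minimal sketch assuming the repo ID shown at the top of this page; the generation parameters are illustrative defaults, not settings recommended by the model card.

```python
# A minimal text-generation example using the standard transformers
# pipeline. Sampling parameters here are illustrative, not tuned or
# recommended values from the model card.
from transformers import pipeline

generator = pipeline("text-generation", model="Weyaxi/Dolphin-Nebula-7B")

output = generator(
    "Summarize the benefits of model merging in one paragraph:",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(output[0]["generated_text"])
```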