Weyaxi/OpenOrca-Nebula-7B
OpenOrca-Nebula-7B is a 7 billion parameter language model developed by Weyaxi, created by merging Open-Orca/Mistral-7B-OpenOrca and PulsarAI/Nebula-7B. This model leverages the strengths of its constituent models to offer enhanced general-purpose language understanding and generation capabilities. It is designed for a broad range of applications requiring robust text processing and conversational AI.
Overview
OpenOrca-Nebula-7B merges two Mistral-based models, Open-Orca/Mistral-7B-OpenOrca and PulsarAI/Nebula-7B, with the aim of combining their respective strengths into a single versatile foundation for natural language processing tasks.
Key Characteristics
- Architecture: Based on the Mistral 7B architecture, enhanced through a merge with Nebula-7B.
- Parameter Count: 7 billion parameters, offering a balance between performance and computational efficiency.
- Context Length: Supports a context window of 4096 tokens.
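The fixed 4096-token window means long prompts must leave room for the tokens the model will generate. A minimal sketch of that budgeting, keeping the most recent tokens (the usual choice for chat history); the 512-token generation reservation is illustrative, not a value from this model's configuration:

```python
def fit_to_context(prompt_tokens, context_len=4096, gen_budget=512):
    """Trim a token-id sequence so prompt + generated reply fit the window.

    context_len matches the 4096-token window noted above; gen_budget is an
    assumed reservation for the model's reply. Keeps the newest tokens.
    """
    max_prompt = context_len - gen_budget
    if len(prompt_tokens) <= max_prompt:
        return prompt_tokens
    return prompt_tokens[-max_prompt:]


# Example: a 5000-token prompt is trimmed to the last 3584 tokens.
trimmed = fit_to_context(list(range(5000)))
```

In practice the token count should come from this model's own tokenizer, since different tokenizers split text differently.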
Performance
Specific benchmark scores are not published in this model card, but the model is listed on the Open LLM Leaderboard, meaning it has been evaluated on standardized benchmarks such as ARC, HellaSwag, MMLU, and TruthfulQA. Consult the leaderboard for up-to-date scores.
Use Cases
This model is suitable for general-purpose language tasks, including but not limited to:
- Text generation
- Question answering
- Summarization
- Conversational AI
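For conversational use, prompts are typically formatted with the chat template of the base model. Open-Orca/Mistral-7B-OpenOrca uses the ChatML style shown below; whether the merged model expects the same template is an assumption to verify against this model's tokenizer configuration:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation in ChatML style.

    The <|im_start|>/<|im_end|> markers follow the template used by the
    Mistral-7B-OpenOrca base model (assumed, not confirmed, for this merge).
    The trailing assistant header cues the model to generate its reply.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize the benefits of model merging.",
)
```

With the Hugging Face transformers library, the same result is usually obtained more robustly via the tokenizer's built-in `apply_chat_template`, which reads the template shipped with the model.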