Weyaxi/OpenOrca-Platypus2-13B-QLoRA-0.80-epoch
Weyaxi/OpenOrca-Platypus2-13B-QLoRA-0.80-epoch is a 13-billion-parameter language model created by merging Open-Orca/OpenOrcaxOpenChat-Preview2-13B with Platypus2-13B-QLoRA-0.80-epoch. It achieves an average score of 64.24 on the Open LLM Leaderboard, with notable results on HellaSwag (82.99) and ARC (62.37). The model is designed for general language understanding and generation, and its 4096-token context length suits a range of conversational and analytical applications.
Model Overview
Weyaxi/OpenOrca-Platypus2-13B-QLoRA-0.80-epoch is a 13-billion-parameter language model resulting from a merge of two distinct models: Open-Orca/OpenOrcaxOpenChat-Preview2-13B and Platypus2-13B-QLoRA-0.80-epoch. This merging strategy aims to combine the strengths of both base models, offering a versatile tool for a variety of natural language processing tasks.
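The card does not state which merge method was used. A common technique for combining two fine-tunes of the same base model is a linear (weighted-average) merge of their parameters; the sketch below illustrates that idea on plain Python floats standing in for weight tensors. The function name and the 50/50 weighting are illustrative assumptions, not details from this model.

```python
def linear_merge(state_dict_a: dict, state_dict_b: dict, alpha: float = 0.5) -> dict:
    """Illustrative linear merge: each parameter in the result is a weighted
    average of the corresponding parameters from the two source models.
    Both state dicts must share the same keys (same architecture).
    NOTE: a simplified sketch, not the confirmed recipe for this model."""
    return {
        name: alpha * state_dict_a[name] + (1.0 - alpha) * state_dict_b[name]
        for name in state_dict_a
    }

# Toy demonstration with scalar "weights" in place of real tensors:
model_a = {"layer.weight": 1.0}
model_b = {"layer.weight": 3.0}
merged = linear_merge(model_a, model_b, alpha=0.5)
print(merged["layer.weight"])  # midpoint of the two source values
```

With real checkpoints, the same element-wise averaging would be applied to every tensor in the two models' state dicts before saving the merged result.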
Key Capabilities & Performance
This model's performance is evaluated on the Open LLM Leaderboard, showing balanced capabilities across several benchmarks:
- Average Score: 64.24
- ARC (25-shot): 62.37
- HellaSwag (10-shot): 82.99
- MMLU (5-shot): 59.38
- TruthfulQA (0-shot): 52.20
These scores indicate a solid foundation for tasks requiring science question answering and reasoning (ARC), commonsense inference (HellaSwag), broad academic knowledge (MMLU), and factual reliability (TruthfulQA). The model operates with a context length of 4096 tokens.
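The reported average is simply the mean of the four benchmark scores, which can be checked directly:

```python
# Benchmark scores reported on the Open LLM Leaderboard for this model.
scores = {
    "ARC (25-shot)": 62.37,
    "HellaSwag (10-shot)": 82.99,
    "MMLU (5-shot)": 59.38,
    "TruthfulQA (0-shot)": 52.20,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # matches the reported 64.24 to within rounding
```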
Potential Use Cases
Given its balanced performance across various benchmarks, this model is suitable for:
- General-purpose text generation: Creating coherent and contextually relevant text.
- Question answering: Responding to queries based on provided context or general knowledge.
- Conversational AI: Developing chatbots or interactive agents that require understanding and generating human-like dialogue.
- Text summarization: Condensing longer texts into shorter, informative summaries.
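For the use cases above, the model would typically be driven through an instruction-style prompt. The card does not specify a prompt template; the Alpaca-style format below is an assumption based on the Platypus2 lineage, and the transformers usage is shown only as commented guidance since it requires downloading the 13B weights and a suitable GPU.

```python
def build_prompt(instruction: str) -> str:
    """Build an Alpaca-style instruction prompt.
    NOTE: the template is an assumption, not confirmed by this model card."""
    return (
        "### Instruction:\n\n"
        f"{instruction}\n\n"
        "### Response:\n\n"
    )

prompt = build_prompt("Summarize the plot of Hamlet in two sentences.")
print(prompt)

# Hedged usage sketch with Hugging Face transformers (not run here):
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
#
# repo = "Weyaxi/OpenOrca-Platypus2-13B-QLoRA-0.80-epoch"
# tokenizer = AutoTokenizer.from_pretrained(repo)
# model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
#
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If the assumed template underperforms, it is worth experimenting with the formats used by the two parent models, since a merge generally inherits sensitivity to both.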