Weyaxi/OrcaMini-Platypus2-13B-QLoRA-0.80-epoch
Weyaxi/OrcaMini-Platypus2-13B-QLoRA-0.80-epoch is a 13 billion parameter language model with a 4096-token context length, created by merging psmathur/orca_mini_v3_13b and Weyaxi/Platypus2-13B-QLoRA-0.80-epoch. The model scores an average of 54.08 on the Open LLM Leaderboard, with balanced results across most benchmarks. It is suitable for general-purpose language understanding and generation tasks, particularly those requiring a blend of reasoning and factual recall.
Model Overview
Weyaxi/OrcaMini-Platypus2-13B-QLoRA-0.80-epoch is a 13 billion parameter language model produced by merging two models: psmathur/orca_mini_v3_13b and Weyaxi/Platypus2-13B-QLoRA-0.80-epoch. The merge aims to combine the instruction-following strengths of Orca Mini with the reasoning-focused Platypus2 fine-tune, yielding a general-purpose model for a range of natural language processing tasks.
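The card does not document the exact merge recipe. A common approach for merges like this is element-wise linear interpolation of the two models' weights; the sketch below illustrates the idea on toy scalar "state dicts" (the `alpha` value and the assumption of interpolation are illustrative, not the confirmed method for this model).

```python
def merge_state_dicts(a, b, alpha=0.5):
    """Element-wise linear interpolation of two state dicts.

    A common (assumed, not confirmed) merge method:
    merged = alpha * a + (1 - alpha) * b for each parameter tensor.
    Toy floats stand in for real weight tensors here.
    """
    return {key: alpha * a[key] + (1 - alpha) * b[key] for key in a}


# Toy example: two one-parameter "models" with matching keys.
sd_orca = {"layer.weight": 1.0}
sd_platypus = {"layer.weight": 0.0}

merged = merge_state_dicts(sd_orca, sd_platypus, alpha=0.5)
print(merged)  # → {'layer.weight': 0.5}
```

In practice the same loop runs over full tensors for every shared parameter name, which requires both models to share an architecture, as the two 13B Llama-based parents here do.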
Key Capabilities & Performance
Evaluated on the Hugging Face Open LLM Leaderboard, this model achieves an average score of 54.08. Specific benchmark results highlight its performance across different domains:
- ARC (25-shot): 60.84
- HellaSwag (10-shot): 82.56
- MMLU (5-shot): 56.42
- TruthfulQA (0-shot): 53.32
- Winogrande (5-shot): 75.93
- GSM8K (5-shot): 2.27
- DROP (3-shot): 47.24
These scores indicate solid common sense reasoning, language understanding, and factual recall, with notably strong results on HellaSwag and Winogrande. The very low GSM8K score (2.27), however, shows the model struggles with multi-step arithmetic word problems.
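The reported 54.08 average appears to be the unweighted mean of the seven benchmark scores listed above, which is easy to verify:

```python
# Per-benchmark scores as reported on this card.
scores = {
    "ARC (25-shot)": 60.84,
    "HellaSwag (10-shot)": 82.56,
    "MMLU (5-shot)": 56.42,
    "TruthfulQA (0-shot)": 53.32,
    "Winogrande (5-shot)": 75.93,
    "GSM8K (5-shot)": 2.27,
    "DROP (3-shot)": 47.24,
}

# Unweighted mean across all seven benchmarks.
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # → 54.08
```

The check also makes the effect of the outlier visible: dropping GSM8K from the mean would raise the average by over eight points.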
Ideal Use Cases
This model is well-suited for applications requiring a general-purpose language model with solid performance across a range of tasks. Its balanced benchmark results suggest it can be effectively used for:
- Text generation and summarization
- Question answering
- Reasoning tasks (excluding multi-step arithmetic, given the low GSM8K score)
- Content creation
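For these use cases, the model can be loaded like any causal language model on the Hugging Face Hub. The sketch below uses the standard `transformers` API; the generation settings are illustrative assumptions, and the heavy imports are kept inside the function so the file can be inspected without the dependency installed.

```python
MODEL_ID = "Weyaxi/OrcaMini-Platypus2-13B-QLoRA-0.80-epoch"


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Minimal text-generation sketch for this model.

    Assumes `transformers` and `torch` are installed and enough
    GPU/CPU memory is available for a 13B model; sampling settings
    are illustrative, not recommendations from the card.
    """
    # Imported lazily so defining this function has no heavy dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Summarize the plot of Hamlet in two sentences."))
```

Keep the 4096-token context length in mind when building prompts: input and generated tokens share that budget.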