Weyaxi/OpenOrcaPlatypus2-Platypus2-13B-QLora-0.80-epoch
The Weyaxi/OpenOrcaPlatypus2-Platypus2-13B-QLora-0.80-epoch model is a 13-billion-parameter language model fine-tuned with QLoRA. It builds on the Platypus2 family of LLaMA-2 fine-tunes and achieves an average score of 49.71 on the Open LLM Leaderboard benchmarks. The model is suitable for general language understanding and generation tasks, particularly those that benefit from solid performance across academic and reasoning evaluations.
Model Overview
Weyaxi/OpenOrcaPlatypus2-Platypus2-13B-QLora-0.80-epoch is a 13-billion-parameter model derived from Platypus2 and fine-tuned with QLoRA for 0.80 of a training epoch, as the name suggests. It has been evaluated across the Open LLM Leaderboard benchmark suite to assess its capabilities.
Key Performance Metrics
Based on the Open LLM Leaderboard evaluations, the model exhibits the following scores:
- Avg.: 49.71
- ARC (25-shot): 59.81
- HellaSwag (10-shot): 82.69
- MMLU (5-shot): 56.96
- TruthfulQA (0-shot): 52.92
- Winogrande (5-shot): 74.43
- GSM8K (5-shot): 2.35
- DROP (3-shot): 18.83
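The reported average is the unweighted mean of the seven per-benchmark scores above, which can be verified directly:

```python
# Recompute the leaderboard average from the per-benchmark scores
# listed in this model card.
scores = {
    "ARC": 59.81,
    "HellaSwag": 82.69,
    "MMLU": 56.96,
    "TruthfulQA": 52.92,
    "Winogrande": 74.43,
    "GSM8K": 2.35,
    "DROP": 18.83,
}

average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 49.71
```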
These metrics indicate solid performance in common sense reasoning (ARC, HellaSwag, Winogrande), general knowledge (MMLU), and truthfulness (TruthfulQA), while showing markedly lower scores in multi-step mathematical reasoning (GSM8K) and reading comprehension with discrete reasoning (DROP).
Use Cases
This model is well-suited for applications requiring:
- General text generation and understanding.
- Tasks benefiting from strong common sense and general knowledge.
- Scenarios where a 13B parameter model offers a balance between performance and computational efficiency.
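As a minimal sketch of how the model could be used with the Hugging Face transformers library: the repository id is taken from this card, but the Alpaca-style prompt template is an assumption (Platypus-family models are commonly fine-tuned with that format), and generation requires a GPU with enough memory for a 13B model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Weyaxi/OpenOrcaPlatypus2-Platypus2-13B-QLora-0.80-epoch"


def build_prompt(instruction: str) -> str:
    # Alpaca-style template -- an assumption, not confirmed by the card;
    # Platypus-family models are typically trained with this format.
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Loading in fp16 needs roughly 26 GB of GPU memory for 13B weights;
    # quantized loading (e.g. 4-bit) reduces this substantially.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, device_map="auto", torch_dtype="auto"
    )
    inputs = tokenizer(
        build_prompt(instruction), return_tensors="pt"
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Prompt construction alone needs no GPU or download.
    print(build_prompt("Summarize the Open LLM Leaderboard in one sentence."))
```

The prompt builder is separated from model loading so the heavyweight download happens only when generation is actually requested.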