Weyaxi/2x-LoRA-Assemble-Nova-13B
The Weyaxi/2x-LoRA-Assemble-Nova-13B model is a 13 billion parameter language model developed by PulsarAI, achieving an average score of 50.34 on the Open LLM Leaderboard. It demonstrates strong performance in tasks like HellaSwag (83.24) and ARC (62.63). This model is suitable for general language understanding and generation tasks, particularly where a balance of performance and efficiency is desired.
Model Overview
The Weyaxi/2x-LoRA-Assemble-Nova-13B is a 13 billion parameter language model developed by PulsarAI. It has been evaluated on the Open LLM Leaderboard, achieving an average score of 50.34 across various benchmarks. This model is designed for general-purpose language tasks, offering a solid foundation for applications requiring robust language understanding and generation capabilities.
Key Performance Metrics
Based on the Open LLM Leaderboard evaluation, the model demonstrates notable performance in several areas:
- HellaSwag (10-shot): 83.24
- ARC (25-shot): 62.63
- MMLU (5-shot): 58.64
- Winogrande (5-shot): 76.95
While it excels at common sense reasoning and reading comprehension, its scores for mathematical reasoning (GSM8K: 10.24) and discrete reasoning over paragraphs (DROP: 8.8) mark tasks where it may not be the optimal choice. Its TruthfulQA score of 51.88 suggests a moderate ability to avoid common misconceptions and generate factually correct responses.
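As a sanity check, the 50.34 leaderboard average quoted above can be reproduced from the seven per-benchmark scores in this card (the four listed metrics plus GSM8K, DROP, and TruthfulQA), since the Open LLM Leaderboard average is the unweighted mean:

```python
# Reproduce the Open LLM Leaderboard average from the seven
# per-benchmark scores quoted in this model card.
scores = {
    "ARC (25-shot)": 62.63,
    "HellaSwag (10-shot)": 83.24,
    "MMLU (5-shot)": 58.64,
    "Winogrande (5-shot)": 76.95,
    "TruthfulQA": 51.88,
    "GSM8K": 10.24,
    "DROP": 8.8,
}

average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 50.34
```

The unweighted mean of the seven scores matches the reported average exactly, confirming that no benchmark is weighted differently.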
Use Cases
This model is well-suited for applications requiring:
- General text generation and completion.
- Common sense reasoning tasks.
- Reading comprehension and question answering where complex numerical reasoning or strict factual accuracy is not the primary concern.
- Prototyping and development where a 13B parameter model offers a good balance of performance and computational requirements.
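For prototyping, the model can be loaded through the Hugging Face `transformers` library. The sketch below is illustrative rather than an official snippet from the model card: the repository id is taken from the title above, while the dtype, device placement, and generation settings are assumptions you should adjust to your hardware.

```python
# Illustrative usage sketch (not an official snippet): load the model
# and run a single generation. The repo id comes from this model card;
# all other settings are assumptions.
MODEL_ID = "Weyaxi/2x-LoRA-Assemble-Nova-13B"


def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and model weights from the Hugging Face Hub."""
    # Imported lazily so the sketch can be inspected without the
    # heavy dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # use the checkpoint's native precision
        device_map="auto",    # shard/offload across available devices
    )
    return tokenizer, model


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion for a plain-text prompt."""
    tokenizer, model = load_model()
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Note that calling `generate(...)` downloads the full 13B-parameter checkpoint (roughly 26 GB in 16-bit precision), so a GPU with sufficient memory, or CPU/disk offloading via `device_map="auto"`, is assumed.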