Weyaxi/ChatAYT-Lora-Assamble-Marcoroni
Model Overview
Weyaxi/ChatAYT-Lora-Assamble-Marcoroni is a 13-billion-parameter language model with a 4096-token context window, developed by Weyaxi. It is aimed at general-purpose natural language processing tasks, offering a solid foundation for text understanding and generation. Its performance has been evaluated on the Open LLM Leaderboard, which reports benchmark results across reasoning, reading comprehension, and knowledge tasks.
Key Capabilities
- Reasoning: Achieves 62.46 on ARC (25-shot) and 77.35 on Winogrande (5-shot), indicating good common sense and reasoning abilities.
- Reading Comprehension: Scores 83.05 on HellaSwag (10-shot), demonstrating proficiency in understanding contextual information.
- Knowledge & Understanding: Scores 58.72 on MMLU (5-shot) and 56.12 on TruthfulQA (0-shot), reflecting broad general knowledge and a tendency toward factually grounded responses.
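As a quick sanity check, the per-benchmark scores listed above can be combined into a single unweighted mean, in the spirit of the Open LLM Leaderboard's overall average. This is only a sketch over the five benchmarks shown here; the official leaderboard average may include benchmarks not listed on this page.

```python
# Benchmark scores reported above (percentages).
scores = {
    "ARC (25-shot)": 62.46,
    "HellaSwag (10-shot)": 83.05,
    "MMLU (5-shot)": 58.72,
    "TruthfulQA (0-shot)": 56.12,
    "Winogrande (5-shot)": 77.35,
}

# Simple unweighted mean over the listed benchmarks.
average = sum(scores.values()) / len(scores)
print(f"Mean of listed benchmarks: {average:.2f}")  # → 67.54
```

The resulting mean of 67.54 matches the "balanced performance" characterization: no single benchmark dominates, with HellaSwag the strongest and TruthfulQA the weakest of the five.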
Good For
- General Language Tasks: Suitable for a wide range of applications including text generation, summarization, and question answering where a balanced performance across multiple domains is required.
- Research and Development: Provides a capable base model for further fine-tuning or experimentation in various NLP subfields.
- Context-Sensitive Applications: Its ARC and HellaSwag scores suggest it can handle tasks that demand a solid grasp of context and logical inference.
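For research or fine-tuning experiments, the model can be loaded with the Hugging Face transformers library via its repo id. The following is a minimal sketch, assuming transformers, torch, and accelerate are installed and that the full 13B weights (roughly 26 GB in half precision) can be downloaded; the dtype, device placement, and prompt are illustrative choices, not part of the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/ChatAYT-Lora-Assamble-Marcoroni"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit the 13B weights in less memory
    device_map="auto",          # spread layers across available GPUs/CPU automatically
)

# Illustrative prompt; keep input plus generated tokens within the
# 4096-token context window noted above.
prompt = "Explain the difference between supervised and unsupervised learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Half precision plus `device_map="auto"` is a common default for 13B-scale models; quantized loading (e.g. 8-bit or 4-bit via bitsandbytes) is an alternative when GPU memory is tight.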