liminerity/e.star.7.b
liminerity/e.star.7.b is a 7-billion-parameter Mistral-based causal language model developed by gate369 and fine-tuned from yam-peleg/Experiment26-7B. It was trained with Unsloth and Hugging Face's TRL library, which the author reports gave a 2x training speedup. With an average score of 68.28 on the Open LLM Leaderboard, the model shows solid general reasoning and suits a broad range of general-purpose language understanding and generation tasks.
Overview
liminerity/e.star.7.b builds on the Mistral architecture and was fine-tuned from yam-peleg/Experiment26-7B, leveraging Unsloth together with Hugging Face's TRL library for the reported 2x faster training. The model is released under the Apache-2.0 license.
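Because this is a standard Mistral-style causal LM hosted on the Hugging Face Hub, it should load through the usual transformers API. The snippet below is a minimal inference sketch; the dtype, device placement, and generation settings are illustrative assumptions, not recommendations from the model author.

```python
# Minimal inference sketch using the transformers API.
# Generation settings are illustrative defaults, not values
# recommended by the model author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liminerity/e.star.7.b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 7B model in fp16 fits on a ~16 GB GPU
    device_map="auto",
)

prompt = "Explain the difference between supervised and unsupervised learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```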
Key Capabilities
- General Reasoning: Achieves an average score of 68.28 on the Open LLM Leaderboard, indicating solid performance across various benchmarks.
- Efficient Training: Benefits from Unsloth's optimizations, which speed up fine-tuning and shorten iteration and development cycles (a sketch of such a setup follows this list).
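The exact training recipe has not been published. As a rough illustration of the Unsloth + TRL workflow the card credits, the sketch below shows a typical QLoRA-style fine-tune of the base model; the dataset, LoRA rank, and every hyperparameter are assumptions, not the author's actual configuration.

```python
# Hypothetical fine-tuning setup in the style the card describes
# (Unsloth + TRL). All hyperparameters and the dataset are assumed;
# the actual training recipe for e.star.7.b is not published.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yam-peleg/Experiment26-7B",  # base model per the card
    max_seq_length=2048,
    load_in_4bit=True,  # Unsloth's 4-bit QLoRA path
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # assumed LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset: assumes a local JSONL file whose records
# each contain a "text" field with the full training example.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="e.star.7.b",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=60,
        fp16=True,
    ),
)
trainer.train()
```

The reported speedup comes from Unsloth's fused kernels and memory optimizations rather than from anything in this particular configuration; the same SFTTrainer call works with a stock transformers model, just more slowly.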
Open LLM Leaderboard Evaluation Results
The model's performance has been evaluated on the Hugging Face Open LLM Leaderboard, where detailed per-task results are available. Key scores:
| Benchmark | Score |
|---|---|
| Average | 68.28 |
| AI2 Reasoning Challenge (25-shot) | 63.91 |
| HellaSwag (10-shot) | 86.02 |
| MMLU (5-shot) | 63.44 |
| TruthfulQA (0-shot) | 54.91 |
| Winogrande (5-shot) | 80.19 |
| GSM8K (5-shot) | 61.18 |
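The reported average is simply the arithmetic mean of the six benchmark scores, which is easy to check:

```python
# Verify that the reported average is the mean of the six benchmarks.
scores = {
    "ARC (25-shot)": 63.91,
    "HellaSwag (10-shot)": 86.02,
    "MMLU (5-shot)": 63.44,
    "TruthfulQA (0-shot)": 54.91,
    "Winogrande (5-shot)": 80.19,
    "GSM8K (5-shot)": 61.18,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.3f}")  # 68.275, which rounds to the reported 68.28
```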
Good For
- General-purpose text generation and understanding tasks.
- Applications requiring a 7B parameter model with a focus on balanced reasoning and language capabilities.
- Developers looking for a model trained with efficient methods like Unsloth.