LTC-AI-Labs/L2-7b-Hermes-Synthia
LTC-AI-Labs/L2-7b-Hermes-Synthia is a 7 billion parameter language model fine-tuned by LTC-AI-Labs on the Hermes2 7B model using the Synthia dataset. This model is specifically optimized for role-playing scenarios, demonstrating strong performance in conversational and interactive text generation. With a context length of 4096 tokens, it is primarily designed for applications requiring engaging and dynamic character interactions.
Overview
Built on the Hermes2 7B base and fine-tuned by LTC-AI-Labs with the Synthia dataset, the model is particularly noted for strong performance in role-playing applications, as evaluated on platforms such as LavernAI.
Key Capabilities
- Optimized for Role-Playing: Demonstrates enhanced ability in generating dynamic and engaging conversational text for role-playing scenarios.
- Based on Hermes2 7B: Leverages the foundational capabilities of the Hermes2 7B model.
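A minimal usage sketch with the Hugging Face `transformers` library follows. The repository id matches this card; the generation settings (`temperature`, `max_new_tokens`) and the `generate_reply` helper are illustrative assumptions, not settings recommended by the model authors.

```python
def generate_reply(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a continuation for `prompt` with L2-7b-Hermes-Synthia.

    Imports are deferred so the helper can be defined without the
    heavy dependencies installed; calling it downloads the weights.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "LTC-AI-Labs/L2-7b-Hermes-Synthia"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,       # sampling suits open-ended role-play
        temperature=0.8,      # illustrative value, tune to taste
    )
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```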
Open LLM Leaderboard Evaluation
The model's performance has been evaluated on the Open LLM Leaderboard. Key metrics include:
- Avg.: 52.21
- AI2 Reasoning Challenge (25-shot): 51.02
- HellaSwag (10-shot): 79.12
- MMLU (5-shot): 47.88
- TruthfulQA (0-shot): 46.77
- Winogrande (5-shot): 74.51
- GSM8k (5-shot): 13.95
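The reported average is the unweighted mean of the six task scores above, which a quick check reproduces:

```python
# Per-task scores from the Open LLM Leaderboard results listed above.
scores = {
    "AI2 Reasoning Challenge (25-shot)": 51.02,
    "HellaSwag (10-shot)": 79.12,
    "MMLU (5-shot)": 47.88,
    "TruthfulQA (0-shot)": 46.77,
    "Winogrande (5-shot)": 74.51,
    "GSM8k (5-shot)": 13.95,
}

# Unweighted mean, rounded to two decimals as on the leaderboard.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 52.21
```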
Good for
- Applications requiring high-quality, engaging role-play interactions.
- Generative tasks where conversational depth and character consistency are crucial.
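For long-running role-play sessions, the chat history must stay inside the 4096-token context window. The sketch below shows one common approach, dropping the oldest turns first; `count_tokens` is a crude whitespace stand-in assumed for illustration, and in practice you would count tokens with the model's own tokenizer.

```python
CONTEXT_LIMIT = 4096  # context length stated on this model card


def count_tokens(text: str) -> int:
    # Whitespace split is a rough stand-in for the real tokenizer;
    # real token counts are typically higher.
    return len(text.split())


def trim_history(system: str, turns: list[str], reserve: int = 512) -> list[str]:
    """Drop the oldest turns until the system prompt plus history fit,
    leaving `reserve` tokens of headroom for the model's reply."""
    budget = CONTEXT_LIMIT - reserve - count_tokens(system)
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):  # walk newest-first, keep recent turns
        cost = count_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Keeping the most recent turns (rather than, say, summarizing older ones) is the simplest policy; it preserves immediate conversational context at the cost of long-range memory.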