chargoddard/loyal-piano-m7
chargoddard/loyal-piano-m7 is a 7 billion parameter language model developed by chargoddard, featuring a 4096-token context length. It was trained with an experimental dataset ratio, aiming for strong roleplay capabilities, general intelligence, and long-context recall. As of November 2023, it ranked as the #4 7B model on a public leaderboard, demonstrating competitive performance across various benchmarks.
Overview
chargoddard/loyal-piano-m7 is a 7 billion parameter language model developed by chargoddard, built with Axolotl. The model was created as an experiment in dataset ratios, with the initial goal of excelling at roleplay, general intelligence, and long-context recall. While its roleplay capabilities are still being evaluated, the model has shown promising results in the other areas.
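The card does not specify an inference stack, but checkpoints like this are typically usable through the standard Hugging Face transformers causal-LM API. Below is a minimal loading and generation sketch under that assumption; device_map="auto" additionally requires the accelerate package, and the prompt is purely illustrative.

```python
# Minimal sketch, assuming standard Hugging Face transformers compatibility
# (not stated on the card). Requires `transformers` and, for device_map,
# `accelerate`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chargoddard/loyal-piano-m7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the card does not document a required prompt format.
prompt = "Write a short scene in which two rival pianists meet after a recital."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```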
Key Capabilities
- Competitive Performance: As of November 30, 2023, loyal-piano-m7 ranked as the #4 7B model on a public leaderboard, indicating strong general performance.
- Balanced Benchmark Results: Achieves solid scores on ARC (66.72), HellaSwag (85.03), MMLU (64.43), TruthfulQA (60.03), and Winogrande (79.08), with a weaker result on GSM8K (25.7).
- Experimental Training: Trained on a distinctive dataset mix of PIPPA (43%), summarize_from_feedback (26%), orca_mini_v1_dataset (17%), rpguild (8%), and LimaRP (6%); see the sketch after this list.
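The training run itself used Axolotl, so the exact sampling logic is not reproduced here; the sketch below only illustrates what the reported ratios mean, using interleave_datasets from the Hugging Face datasets library. The dataset repository IDs are hypothetical placeholders, not the actual source repositories.

```python
# Illustrative only: approximates the reported mix with `datasets`;
# the real run used Axolotl, and these repository IDs are placeholders.
from datasets import load_dataset, interleave_datasets

# Reported ratios (they sum to 1.0, as interleave_datasets requires).
mix = {
    "placeholder/pippa": 0.43,
    "placeholder/summarize_from_feedback": 0.26,
    "placeholder/orca_mini_v1_dataset": 0.17,
    "placeholder/rpguild": 0.08,
    "placeholder/limarp": 0.06,
}

components = [load_dataset(repo, split="train") for repo in mix]
mixed = interleave_datasets(components, probabilities=list(mix.values()), seed=42)
```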
Good for
- General-purpose text generation: Its strong benchmark performance suggests suitability for a variety of tasks.
- Research and experimentation: Ideal for developers interested in exploring the impact of diverse dataset compositions on model performance and specific capabilities like roleplay and long-context understanding.