BeagleLake-7B: A Merged 7B Language Model
BeagleLake-7B is a 7-billion-parameter language model by fhai50032, created by merging two distinct models: mlabonne/NeuralBeagle14-7B and fhai50032/RolePlayLake-7B. The merge was performed with the DARE TIES method, using mlabonne/NeuralBeagle14-7B as the base model.
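For illustration, DARE TIES merges like this are typically driven by a mergekit configuration. The sketch below writes out such a config and runs mergekit's CLI; the density and weight values are placeholders, not the actual parameters used for BeagleLake-7B, which are not reproduced here.

```python
# A minimal sketch of driving a DARE TIES merge with mergekit
# (pip install mergekit). The density/weight values are placeholders,
# NOT the actual BeagleLake-7B merge parameters.
import pathlib
import subprocess

CONFIG = """\
models:
  - model: mlabonne/NeuralBeagle14-7B
    # No parameters needed for the base model.
  - model: fhai50032/RolePlayLake-7B
    parameters:
      density: 0.6  # placeholder: fraction of delta weights DARE keeps
      weight: 0.5   # placeholder: scale of RolePlayLake-7B's contribution
merge_method: dare_ties
base_model: mlabonne/NeuralBeagle14-7B
dtype: bfloat16
"""

pathlib.Path("beaglelake.yml").write_text(CONFIG)
# mergekit-yaml <config> <output-dir> performs the merge.
subprocess.run(["mergekit-yaml", "beaglelake.yml", "./BeagleLake-7B"], check=True)
```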
Key Capabilities & Characteristics
- Hybrid Performance: Aims to combine the robust general performance of NeuralBeagle14-7B with the specialized role-playing (RP) capabilities and uncensored nature of RolePlayLake-7B.
- Merge Method: Utilizes the dare_ties merge method, configured with specific weights and densities for the constituent models, suggesting an optimized blend of their features.
- Leaderboard Performance: Achieves an average score of 72.34 on the Open LLM Leaderboard, with notable scores including 87.38 on HellaSwag and 83.19 on Winogrande.
- Base for Fine-tuning: Positioned as a strong base model for subsequent fine-tuning, leveraging the combined strengths of its components.
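For readers who want to use it that way, here is a minimal LoRA fine-tuning sketch built on transformers and peft. The model id fhai50032/BeagleLake-7B and all hyperparameters are assumptions for illustration, not values taken from the model card.

```python
# Minimal LoRA setup sketch with Hugging Face transformers + peft.
# Assumes the merged model is published as "fhai50032/BeagleLake-7B";
# hyperparameters are illustrative, not tuned values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "fhai50032/BeagleLake-7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Attach low-rank adapters to the attention projections (Mistral-style names).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From here, the wrapped model can be passed to any standard training loop or to a trainer such as transformers' Trainer.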
Use Cases
- Versatile Text Generation: Suitable for a range of text generation tasks, benefiting from the general capabilities of NeuralBeagle14-7B.
- Role-Playing Scenarios: Particularly well-suited for role-playing applications thanks to RolePlayLake-7B, which is noted for its RP suitability and uncensored responses; see the usage sketch after this list.
- Further Customization: An excellent candidate for developers looking for a pre-merged base model to fine-tune for specific applications, potentially reducing the effort of merging from scratch.
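Below is a quick role-play usage sketch. It assumes the model id fhai50032/BeagleLake-7B and that the tokenizer ships a chat template that accepts a system role (if it does not, fold the persona into the first user message); the persona and sampling settings are illustrative.

```python
# Role-play generation sketch. The model id, persona, and sampling
# settings are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fhai50032/BeagleLake-7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Captain Mira, a weary starship pilot."},
    {"role": "user", "content": "Captain, the reactor is failing. What do we do?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings are illustrative; tune temperature/top_p to taste.
output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```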