FelixChao/NinjaDolphin-7B
FelixChao/NinjaDolphin-7B is a 7 billion parameter language model created by FelixChao by merging beowolx/CodeNinja-1.0-OpenChat-7B and beowolx/MistralHermes-CodePro-7B-v1 onto the FelixChao/WizardDolphin-7B base. The merge targets improved coding ability, reaching a HumanEval score of 52.439%, and the model is suited to code generation and programming-related tasks, offering improved performance over its base model.
Overview
FelixChao/NinjaDolphin-7B is a 7 billion parameter language model developed by FelixChao, created through a merge of several specialized models. It combines beowolx/CodeNinja-1.0-OpenChat-7B and beowolx/MistralHermes-CodePro-7B-v1 with FelixChao/WizardDolphin-7B as its base model, utilizing the dare_ties merge method.
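A dare_ties merge of this shape would typically be expressed as a mergekit configuration along the following lines. This is a sketch only: the `density` and `weight` values are illustrative assumptions, since the card does not show the exact parameters used.

```yaml
# Hypothetical mergekit config reproducing the described merge.
# density/weight values are assumed, not taken from the model card.
models:
  - model: beowolx/CodeNinja-1.0-OpenChat-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: beowolx/MistralHermes-CodePro-7B-v1
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: FelixChao/WizardDolphin-7B
dtype: bfloat16
```

With dare_ties, each contributing model's weight deltas are randomly pruned (controlled by `density`) and rescaled before the sign-consensus TIES merge is applied on top of the base model.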
Key Capabilities
- Enhanced Coding Performance: The model is specifically engineered to improve coding abilities, building upon the foundation of WizardDolphin-7B.
- HumanEval Benchmark: Achieves a score of 52.439% on the HumanEval Python benchmark (uninstructed and without post-processing), indicating strong code generation capabilities.
- General Reasoning: Demonstrates solid performance across various benchmarks, including an average score of 69.74 on the Open LLM Leaderboard, with notable results in AI2 Reasoning Challenge (65.61) and HellaSwag (85.35).
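HumanEval scores such as the one above are pass@k rates over the benchmark's 164 problems. A minimal sketch of the arithmetic, using the standard unbiased pass@k estimator from the HumanEval paper; note that the 86-of-164 figure is inferred from the reported percentage, not stated on the card.

```python
import math

def pass_at_1(n_problems: int, n_solved: int) -> float:
    """pass@1 with one sample per problem is simply the fraction solved."""
    return n_solved / n_problems

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per problem, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# 52.439% is numerically consistent with solving 86 of HumanEval's 164 problems.
print(round(pass_at_1(164, 86) * 100, 3))  # -> 52.439
```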
Good For
- Code Generation: Ideal for tasks requiring the generation of Python code.
- Programming Assistance: Can be used in applications where improved coding ability is a primary requirement.
- Research and Development: Suitable for researchers exploring model merging techniques and their impact on specific capabilities like coding.
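In a code-generation workflow, a completion from the model would typically be validated the way HumanEval does: execute it against unit tests and count it correct only if they all pass. A minimal sketch, where the candidate string stands in for model output (real harnesses run this in a sandbox):

```python
# Toy functional-correctness check in the style of HumanEval.
# `candidate` stands in for a completion generated by the model.
candidate = '''
def add(a, b):
    return a + b
'''

unit_tests = '''
assert add(2, 3) == 5
assert add(-1, 1) == 0
'''

def check(program: str) -> bool:
    """Return True if the program runs without raising (all asserts pass)."""
    try:
        # WARNING: real evaluation harnesses sandbox this step;
        # never exec untrusted model output directly.
        exec(program, {})
        return True
    except Exception:
        return False

print(check(candidate + unit_tests))  # prints True for a correct completion
```

A completion that fails any assertion (or doesn't run at all) returns False and counts as unsolved for that problem.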