mlabonne/Zebrafish-7B
Zebrafish-7B by mlabonne is a 7 billion parameter language model with an 8192-token context length, created using the novel Model Stock merge method. This model is a merge of liminerity/M7-7b and rwitz/experiment26-truthy-iter-0, built upon the Mistral-7B-v0.1 base. It demonstrates strong general performance, with an average score of 62.41 on the Nous benchmark suite, making it suitable for a wide range of general-purpose language generation tasks.
Zebrafish-7B: A Model Stock Merge
Zebrafish-7B is a 7 billion parameter language model developed by mlabonne, notable for its use of the innovative Model Stock merge method. This model combines the strengths of liminerity/M7-7b and rwitz/experiment26-truthy-iter-0 using LazyMergekit, built on the mistralai/Mistral-7B-v0.1 base.
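In practice, a Model Stock merge like this is driven by a short mergekit configuration, which is what LazyMergekit generates and runs. The exact configuration used for Zebrafish-7B is not reproduced here; the sketch below is an assumed, illustrative setup that only reuses the model names stated above, with the dtype and output path chosen for illustration.

```python
# Sketch of a Model Stock merge with mergekit (the tool LazyMergekit wraps).
# The YAML below is an assumed example config, not the exact one used for Zebrafish-7B.
import subprocess
from pathlib import Path

yaml_config = """
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: liminerity/M7-7b
  - model: rwitz/experiment26-truthy-iter-0
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
"""

# Write the config and run mergekit's CLI (installed via `pip install mergekit`).
Path("config.yaml").write_text(yaml_config)
subprocess.run(
    ["mergekit-yaml", "config.yaml", "./zebrafish-merge", "--copy-tokenizer"],
    check=True,
)
```

The resulting weights land in the output directory (here `./zebrafish-merge`) and can then be loaded like any other Hugging Face checkpoint.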
Key Capabilities & Performance
Zebrafish-7B exhibits robust general-purpose performance, as evidenced by its evaluation on the Nous benchmark. It achieves an average score of 62.41, with specific scores including:
- AGIEval: 44.92
- GPT4All: 77.18
- TruthfulQA: 78.25
- Bigbench: 49.28
These scores position Zebrafish-7B competitively among other 7B models, including mlabonne/AlphaMonarch-7B and mistralai/Mistral-7B-Instruct-v0.2.
When to Use Zebrafish-7B
- General Language Generation: Its balanced performance across various benchmarks makes it suitable for a broad spectrum of text generation tasks (a minimal loading sketch follows this list).
- Exploration of Merged Models: Developers interested in the Model Stock merging technique can use this model as a practical example.
- Applications requiring a 7B model: For scenarios where resource constraints make a 7B parameter model the practical choice while still delivering strong results.
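As a starting point, the model loads with the standard Hugging Face transformers API. The snippet below is a minimal sketch; the prompt and generation settings are illustrative, not tuned recommendations.

```python
# Minimal sketch: load and run mlabonne/Zebrafish-7B with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/Zebrafish-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

prompt = "Explain what a model merge is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,  # illustrative sampling settings
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```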