Badgids/Gonzo-Chat-7B
Badgids/Gonzo-Chat-7B is a 7-billion-parameter merged LLM based on Mistral v0.1, with an extended 8192-token context length. The model is designed for conversational AI, roleplay, agentic workflows, and light programming tasks. It distinguishes itself through its merge of Mistral-7B-Instruct-v0.2-code-ft, Nous-Hermes-2-Mistral-7B-DPO, and dolphin-2.6-mistral-7b-dpo-laser, yielding a robust and versatile chat model.
Gonzo-Chat-7B Overview
Gonzo-Chat-7B is a 7-billion-parameter language model developed by Badgids, built on the Mistral v0.1 architecture. It features an 8192-token context length, making it suitable for longer conversations and more complex prompts. The model is the result of a DARE TIES merge that combines several specialized Mistral-based models to broaden its capabilities.
Key Capabilities
- Conversational AI: Excels in general chat and interactive dialogue.
- Roleplay: Designed to perform well in roleplaying scenarios.
- Agentic Workflows: Supports integration into agent-based systems.
- Light Programming: Capable of assisting with basic coding tasks.
- Robust Performance: Achieves an average score of 66.63 on the Open LLM Leaderboard, with notable scores in HellaSwag (85.40) and Winogrande (77.74).
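Two of the merged parents, Nous-Hermes-2-Mistral-7B-DPO and dolphin-2.6-mistral-7b-dpo-laser, use the ChatML prompt format. Assuming Gonzo-Chat-7B inherits that format (an assumption, not confirmed by the card), a minimal prompt builder for chat use might look like this sketch:

```python
def build_chatml_prompt(messages):
    """Format a list of {"role", "content"} dicts as a ChatML prompt string.

    ChatML wraps each turn in <|im_start|>{role} ... <|im_end|> markers and
    ends with an open assistant turn for the model to complete.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    ]
    # Leave an open assistant turn so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are Gonzo, a helpful chat assistant."},
    {"role": "user", "content": "Write a haiku about merging models."},
])
print(prompt)
```

If the model ships a chat template in its tokenizer config, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from the `transformers` library is the more reliable route; the function above only illustrates the assumed format.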
Merge Details
Gonzo-Chat-7B was created using the mergekit tool with the DARE TIES method. The merge incorporated:
- Nondzu/Mistral-7B-Instruct-v0.2-code-ft for coding proficiency.
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO for enhanced instruction following and dialogue.
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser for general chat and reasoning ability.
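The exact merge parameters are not published in this card. As a rough illustration of what a DARE TIES merge of these models looks like in mergekit, a hypothetical configuration is sketched below; the `density` and `weight` values and the choice of `base_model` are assumptions, not the settings actually used for Gonzo-Chat-7B:

```yaml
# Hypothetical mergekit config (illustrative values only)
models:
  - model: Nondzu/Mistral-7B-Instruct-v0.2-code-ft
    parameters:
      density: 0.5   # fraction of delta weights kept after random drop (assumed)
      weight: 0.3    # contribution to the merged deltas (assumed)
  - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
    parameters:
      density: 0.5
      weight: 0.4
  - model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1   # assumed common base
dtype: bfloat16
```

In DARE TIES, each model's weight deltas from the base are randomly sparsified (controlled by `density`), rescaled, sign-resolved as in TIES, and summed with the given weights, which is what lets several specialized fine-tunes coexist in one 7B checkpoint.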
Use Cases
This model is particularly well-suited to applications that need a versatile 7B model able to hold dynamic conversations, simulate characters, or assist developers with minor coding challenges. Its merged lineage aims to provide balanced performance across a range of interactive AI tasks.