SynthIQ-7b: A Merged 7B Language Model
SynthIQ-7b is a 7-billion-parameter language model by sethuiyer, created with mergekit by combining layers from Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp and uukuguy/speechless-mistral-six-in-one-7b on top of the mistralai/Mistral-7B-v0.1 base model.
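The exact merge recipe is not reproduced in this card, but mergekit merges of this kind are driven by a YAML configuration. The sketch below is a hypothetical example only (the layer ranges, merge method, and interpolation parameter are assumptions, not the published recipe), shown to illustrate how the named models would be wired together:

```yaml
# Hypothetical mergekit recipe -- illustrative only, not the actual SynthIQ-7b config.
slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]   # assumed: full 32-layer range of a Mistral-7B model
      - model: uukuguy/speechless-mistral-six-in-one-7b
        layer_range: [0, 32]
merge_method: slerp            # assumed method; mergekit also supports ties, linear, etc.
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t: 0.5                       # assumed interpolation weight between the two sources
dtype: bfloat16
```

A config like this would be passed to mergekit's command-line entry point (e.g. mergekit-yaml config.yml ./output-dir) to produce the merged checkpoint.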
Key Capabilities & Performance
This model is notable for its strong performance across a range of benchmarks, achieving an average score of 69.37 on the Open LLM Leaderboard. It has also been rated 92.23/100 by GPT-4 for its ability to handle varied, complex prompts. Specific benchmark scores include:
- HellaSwag: 85.82
- Winogrande: 78.69
- MMLU: 64.75
- GSM8K: 64.06
Unique Features & Usage
SynthIQ-7b is designed for general-purpose applications, excelling in reasoning and conversational tasks. It has been tested and confirmed to work with agentic frameworks such as autogen and CrewAI, making it suitable for multi-agent systems. The model is available in GGUF format for efficient local deployment, with the Q4_K_M quantization recommended as a balance of quality and performance. It is also accessible via Ollama, which simplifies deployment to a single command: ollama run stuehieyr/synthiq.
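Once the model is running under Ollama, it can be queried programmatically through Ollama's local REST API (by default at http://localhost:11434). The following is a minimal sketch, assuming a local Ollama server with the stuehieyr/synthiq tag pulled; the helper name and prompt are illustrative, not part of the model card:

```python
import json
from urllib import request

# Default endpoint for a local Ollama server's one-shot generation API.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_synthiq(prompt: str, url: str = OLLAMA_URL) -> str:
    """Send a single non-streaming generation request to a local Ollama server.

    Assumes Ollama is running and `ollama run stuehieyr/synthiq` (or a pull)
    has already fetched the model.
    """
    payload = json.dumps({
        "model": "stuehieyr/synthiq",  # tag from this model card
        "prompt": prompt,
        "stream": False,               # return one JSON object instead of a stream
    }).encode("utf-8")
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Request body shown without contacting a server:
example_body = {"model": "stuehieyr/synthiq",
                "prompt": "Explain GGUF quantization in one sentence.",
                "stream": False}
print(json.dumps(example_body))
```

Setting "stream": False keeps the sketch simple; for interactive use, streaming responses line by line is usually preferable.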
Licensing
SynthIQ-7b is distributed under the Llama 2 license, inherited from one of its merged components, uukuguy/speechless-mistral-six-in-one-7b.