Model Overview
speechlessai/speechless-mistral-six-in-one-7b-orth-1.0 is a 7-billion-parameter language model built on the Mistral-7B architecture. It is an orthogonal modification of speechless-mistral-six-in-one-7b, which is itself a merge of six prominent Mistral-7B-based models:
- ehartford/dolphin-2.1-mistral-7b
- Open-Orca/Mistral-7B-OpenOrca
- bhenrym14/mistral-7b-platypus-fp16
- ehartford/samantha-1.2-mistral-7b
- teknium/CollectiveCognition-v1.1-Mistral-7B
- HuggingFaceH4/zephyr-7b-alpha
Key Characteristics
- Orthogonal Modification: The model's weights are adjusted in a direction orthogonal to the original weight direction during fine-tuning, aiming to retain the base model's structure while incorporating fine-tuning benefits.
- Merged Foundation: Benefits from the combined strengths of six high-performing Mistral-7B models, enhancing its general capabilities.
- Strong Conversational Abilities: A community benchmark rated the model highly across intellect, creativity, adaptability, communication, and problem-solving, giving it an overall score of 98/100 in a comparison against Llama 2 70B Chat.
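The orthogonal adjustment described above can be illustrated by projecting a fine-tuning weight update onto the subspace orthogonal to the base weights. This is only a minimal NumPy sketch of the general idea, not the model's actual merge procedure; the helper name `orthogonalize_update` is hypothetical.

```python
import numpy as np

def orthogonalize_update(w_base, delta):
    """Remove from `delta` its component along `w_base`, keeping only
    the part orthogonal to the base weight direction.
    Illustrative only -- not the model's actual procedure."""
    w = w_base.flatten()
    d = delta.flatten()
    # Project delta onto the base weight direction, then subtract it
    proj = (d @ w) / (w @ w) * w
    return (d - proj).reshape(delta.shape)

w_base = np.array([[1.0, 0.0], [0.0, 1.0]])
delta = np.array([[0.5, 0.2], [0.1, 0.3]])
orth = orthogonalize_update(w_base, delta)
# The orthogonal part has zero dot product with the base weights
print(np.allclose(orth.flatten() @ w_base.flatten(), 0.0))  # True
```

Keeping only the orthogonal component is what lets the update add new behavior while leaving the base model's existing weight direction, and hence its learned structure, largely intact.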
Performance Highlights
- LM-Evaluation-Harness: Achieves an average score of 53.38 on the Open LLM Leaderboard, with notable scores in HellaSwag (84.6) and MMLU (63.29).
- Code Capabilities: While specific humaneval-python scores are not provided for this version, the base Mistral-7B-v0.1 scored 30.488, indicating a foundational capability in code generation.
Use Cases
This model is well-suited for applications requiring strong general-purpose language understanding and generation, complex reasoning, creative text generation, and engaging conversational AI.
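For these use cases, the model can be loaded with the standard Hugging Face transformers API. The sketch below uses the model id from this card; the prompt and generation settings are illustrative assumptions, and loading the full 7B checkpoint requires a GPU with sufficient memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "speechlessai/speechless-mistral-six-in-one-7b-orth-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place layers on available devices
)

# Illustrative prompt; adjust generation parameters to taste
prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```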