SynthIA-7B-v1.3 Overview
SynthIA-7B-v1.3 is a 7-billion-parameter language model developed by migtissera, built on the Mistral-7B-v0.1 architecture. It has been fine-tuned on Orca-style datasets, optimizing it for robust instruction following and sustained, long-form conversation. A notable feature is its ability to apply Tree of Thought and Chain of Thought reasoning when given a specific system message, improving its logical coherence and depth of elaboration.
Key Capabilities
- Instruction Following: Excels at understanding and executing complex instructions.
- Long-Form Conversations: Designed for sustained and coherent dialogue.
- Advanced Reasoning: Can be prompted to utilize Tree of Thought and Chain of Thought reasoning for detailed elaborations.
- Uncensored Output: Provides responses without inherent content filtering, requiring careful usage.
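The reasoning behavior described above is triggered by the prompt format rather than any API flag. The sketch below shows one way to build such a prompt; the exact system message wording and the SYSTEM/USER/ASSISTANT template are assumptions based on common SynthIA-style conventions, not a verified specification.

```python
# Assumed system message for eliciting Tree of Thought / Chain of Thought
# reasoning; the exact wording used by the model's authors may differ.
SYSTEM_MESSAGE = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning."
)

def build_prompt(user_message: str, system_message: str = SYSTEM_MESSAGE) -> str:
    """Format a single-turn prompt in an assumed SYSTEM/USER/ASSISTANT style."""
    return f"SYSTEM: {system_message}\nUSER: {user_message}\nASSISTANT:"

# The resulting string would then be passed to the model, e.g. via a
# Hugging Face transformers text-generation pipeline.
prompt = build_prompt("Explain why the sky is blue.")
```

If the template is wrong for a given checkpoint, the model will usually still respond but with weaker reasoning structure, so it is worth verifying the format against the model card before deploying.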
Performance Metrics
Evaluated using the EleutherAI Language Model Evaluation Harness, SynthIA-7B-v1.3 demonstrates competitive performance:
- Total Average: 0.6485
- ARC Challenge: 0.6237
- HellaSwag: 0.8349
- MMLU: 0.6232
- TruthfulQA (mc2): 0.5125
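As a quick sanity check, the headline figure is close to the simple arithmetic mean of the four task scores listed above (small last-digit differences can arise from rounding or from how the harness aggregates):

```python
# Recompute the average of the four reported benchmark scores.
# This assumes the "Total Average" is a plain mean over these tasks.
scores = {
    "arc_challenge": 0.6237,
    "hellaswag": 0.8349,
    "mmlu": 0.6232,
    "truthfulqa_mc2": 0.5125,
}
average = sum(scores.values()) / len(scores)
print(f"mean: {average:.4f}")
```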
Good For
- Applications requiring detailed instruction adherence.
- Building conversational agents that need to maintain context over long interactions.
- Research into advanced reasoning techniques like Tree of Thought and Chain of Thought.
- Use cases where an uncensored model is specifically required, with appropriate safeguards implemented by the user.
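Because the model ships without built-in content filtering, the safeguards mentioned above must be supplied by the integrator. A minimal sketch of one such user-side safeguard is shown below; the blocklist approach and the placeholder terms are illustrative assumptions only, and a production system would typically use a dedicated moderation model or service instead.

```python
# Minimal sketch of a user-side output safeguard for an uncensored model.
# The blocklist contents are placeholders, not a recommended policy.
BLOCKED_TERMS = {"example_banned_term"}

def passes_safeguard(text: str) -> bool:
    """Return True if the generated text contains no blocked terms."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```

In practice this check would run on each generation before the text is shown to an end user, with failed outputs logged or regenerated.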