Synthia-7B-v1.2 Overview
Synthia-7B-v1.2, developed by migtissera, is a 7-billion-parameter language model built on the Llama-2 architecture. It was fine-tuned on Orca-style datasets, which improve its instruction-following and long-form conversational abilities. A key differentiator for this version is its explicit support for generalized Tree of Thought (ToT) and Chain of Thought (CoT) reasoning, which can be activated with a specific system prompt.
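As a rough sketch, the reasoning mode is typically enabled by prepending a system instruction in the plain-text SYSTEM/USER/ASSISTANT template commonly used with Synthia models. Both the template and the exact wording of the system prompt below are assumptions for illustration; check the model card for the canonical prompt.

```python
def build_synthia_prompt(system_message: str, user_message: str) -> str:
    """Format a prompt in the plain-text SYSTEM/USER/ASSISTANT
    template (assumed here; verify against the official model card)."""
    return f"SYSTEM: {system_message}\nUSER: {user_message}\nASSISTANT:"

# Hypothetical ToT/CoT-activating system prompt, paraphrased for illustration.
TOT_SYSTEM_PROMPT = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning."
)

prompt = build_synthia_prompt(TOT_SYSTEM_PROMPT, "Why is the sky blue?")
```

The resulting string can then be passed to a standard text-generation pipeline (e.g. Hugging Face `transformers`) loaded with the model's weights.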
Key Capabilities
- Advanced Reasoning: Capable of generalized Tree of Thought and Chain of Thought reasoning for complex problem-solving.
- Instruction Following: Fine-tuned to accurately follow instructions.
- Long-form Conversations: Designed to handle and generate detailed, extended conversational outputs.
- Uncensored Output: The model is uncensored, offering flexibility but requiring cautious use.
Evaluation Highlights
Evaluated on the HuggingFaceH4 Open LLM Leaderboard metrics, Synthia-7B-v1.2 achieved an average score of 57.97. Notable results include 54.35 on ARC Challenge and 79.29 on HellaSwag, demonstrating solid general language understanding and reasoning abilities.
Good For
- Applications requiring structured, multi-step reasoning.
- Developing conversational agents that need to provide elaborate and coherent explanations.
- Use cases where an uncensored model is preferred for broader content generation, with appropriate safeguards implemented by the user.