migtissera/Synthia-70B-v1.1
migtissera/Synthia-70B-v1.1 is a 69-billion-parameter, Llama-2-based, instruction-tuned language model developed by Migel Tissera. Fine-tuned on Orca-style datasets, it excels at instruction following and long-form conversation, and features generalized "Tree of Thought" reasoning capabilities. The model is designed for detailed, factual responses and robust conversational AI applications, with a context length of 32768 tokens.
Synthia-70B-v1.1: An Advanced Instruction-Following LLM
Synthia-70B-v1.1, developed by Migel Tissera, is a 69-billion-parameter model built on the Llama-2 architecture. It has been extensively fine-tuned on Orca-style datasets, making it highly proficient at following instructions and sustaining detailed, long-form conversations. A key differentiator of Synthia is its generalized "Tree of Thought" reasoning capability, which can be explicitly invoked via a system message to construct clear, cohesive Chain of Thought reasoning.
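As a minimal sketch of invoking this behavior, the Synthia family documents a plain-text `SYSTEM:`/`USER:`/`ASSISTANT:` prompt format; the helper and the exact system-message wording below are illustrative assumptions, not the only valid phrasing:

```python
def build_synthia_prompt(user_message: str, system_message: str) -> str:
    """Assemble a prompt in the SYSTEM/USER/ASSISTANT format used by Synthia models."""
    return f"SYSTEM: {system_message}\nUSER: {user_message}\nASSISTANT: "

# Example system message intended to evoke Tree of Thought reasoning
# (wording is an assumption; adapt it to your task):
TOT_SYSTEM = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning."
)

prompt = build_synthia_prompt("How does photosynthesis work?", TOT_SYSTEM)
```

The resulting string can then be passed to whatever inference stack you use (for example, a `transformers` `generate` call) as the raw prompt; the trailing `ASSISTANT: ` cues the model to begin its reply.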
Key Capabilities & Features
- Instruction Following: Highly capable of understanding and executing complex instructions.
- Long-Form Conversations: Designed for extended, coherent dialogue.
- Tree of Thought Reasoning: Features advanced reasoning for structured problem-solving.
- Factual & Detailed Responses: Aims to provide accurate and comprehensive answers.
- Uncensored Output: Provides responses without content filtering.
Performance Highlights
Evaluated on the HuggingFaceH4 Open LLM Leaderboard benchmarks, Synthia-70B-v1.1 demonstrates strong performance across tasks:
- ARC Challenge: 70.05 acc_norm
- HellaSwag: 87.12 acc_norm
- MMLU: 70.34 acc_norm
- TruthfulQA: 57.84 mc2
- Overall Average: 71.34
Ideal Use Cases
- Applications requiring detailed, multi-turn conversational AI.
- Tasks benefiting from structured reasoning and problem-solving.
- Scenarios where uncensored, factual information is critical.
- Developers seeking a powerful Llama-2 based model for instruction-tuned tasks.