invincible-jha/SynLogic-7B
SynLogic-7B is a 7.6-billion-parameter logical-reasoning model developed by MiniMax-AI, built on Qwen2.5-7B-Base and trained with reinforcement learning on the SynLogic dataset. It is specifically optimized for complex logical reasoning tasks and generalizes to mathematical problem-solving without explicit math training. The model achieves notable gains on logical reasoning benchmarks such as KOR-Bench and shows enhanced mathematical capabilities on AIME 2024 and AMC 2023.
SynLogic-7B: Enhanced Logical Reasoning Model
SynLogic-7B is a 7.6-billion-parameter model from MiniMax-AI, fine-tuned from Qwen2.5-7B-Base with reinforcement learning on the SynLogic dataset. Its core strength is comprehensive logical reasoning, trained across 27 diverse tasks including Sudoku and Game of 24. A key differentiator is its ability to transfer these logical skills to mathematical domains: it outperforms its base and instruct counterparts on benchmarks such as AIME 2024 and AMC 2023 despite never being explicitly trained on math.
Key Capabilities & Features
- Superior Logical Reasoning: Achieves a 9.5-point improvement over Qwen2.5-7B-Instruct on KOR-Bench.
- Mathematical Generalization: Demonstrates strong transfer to math problems, reaching 10.0% accuracy on AIME 2024.
- Verifiable Training: Utilizes a unique training approach with automatically verifiable data, enabling effective reinforcement learning.
- Efficient Scale: Delivers robust performance with a compact 7B parameter count, making it efficient for deployment.
When to Use SynLogic-7B
- Complex Logical Puzzles: Ideal for applications requiring advanced logical deduction and problem-solving.
- Mathematical Reasoning: Suitable for tasks that benefit from logical inference applied to mathematical contexts.
- Resource-Constrained Environments: A strong choice for scenarios where a smaller, yet powerful, reasoning model is preferred.
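For the use cases above, the model can be queried like any other causal language model on the Hugging Face Hub. The sketch below is a minimal, unofficial example using the standard transformers API; the repo id is taken from this page's header and the chat-template call assumes the checkpoint ships a Qwen2.5-style chat template, which has not been verified here.

```python
# Minimal sketch of prompting SynLogic-7B with Hugging Face transformers.
# The repo id and chat-template behavior are assumptions, not verified claims.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "invincible-jha/SynLogic-7B"  # assumed repo id (from the page header)


def build_messages(puzzle: str) -> list:
    """Wrap a logic puzzle in a chat-style message list."""
    return [{"role": "user", "content": puzzle}]


def solve(puzzle: str, max_new_tokens: int = 512) -> str:
    """Load the model and generate a solution for a single puzzle.

    Note: downloads the checkpoint on first call; requires a GPU (or
    patience) and the `accelerate` package for device_map="auto".
    """
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    prompt = tokenizer.apply_chat_template(
        build_messages(puzzle), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Usage would be as simple as `solve("Use the numbers 3, 3, 8, 8 to make 24.")`, with the usual sampling parameters (`temperature`, `top_p`) passed through to `generate` if deterministic greedy decoding is not desired.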