open-thoughts/OpenThinker2-7B

7.6B parameters · FP8 · 131,072-token context · License: apache-2.0
Overview

OpenThinker2-7B: A Leading 7B Reasoning Model

OpenThinker2-7B is a 7.6-billion-parameter model from the open-thoughts team, fine-tuned from Qwen/Qwen2.5-7B-Instruct. It ranks among the top-performing 7B reasoning models trained on fully open data, achieving strong results across a suite of challenging tasks. The model was trained on the OpenThoughts2-1M dataset, which scales the original OpenThoughts-114k to roughly one million examples by adding math and code reasoning data produced with new question-generation methodologies.
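Since the model is a standard Qwen2.5 fine-tune, it loads like any other causal language model on the Hugging Face Hub. The snippet below is a minimal sketch using the transformers library; it assumes a GPU with enough memory for a 7.6B model in bfloat16, and the prompt and generation settings are illustrative rather than official recommendations.

```python
# Minimal inference sketch for open-thoughts/OpenThinker2-7B.
# Assumes torch and transformers are installed and a GPU can hold
# the 7.6B model in bfloat16 (quantized loading would be needed
# on smaller hardware).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "What is the sum of the first 100 positive integers?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Reasoning models emit long chains of thought before the final
# answer, so leave a generous generation budget.
output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```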

Key Capabilities & Performance

  • Advanced Reasoning: Demonstrates strong performance in complex reasoning tasks, including mathematical problem-solving and logical deduction.
  • Competitive Benchmarks: Achieves scores comparable to state-of-the-art 7B models like DeepSeek-R1-Distill-Qwen-7B on benchmarks such as AIME24 (50.0), AIME25 (33.3), AMC23 (89.5), and MATH500 (88.4).
  • Enhanced Training Data: Benefits from the OpenThoughts2-1M dataset, which incorporates diverse math and code reasoning examples.

Ideal Use Cases

  • Mathematical Problem Solving: Excellent for applications requiring high accuracy in mathematical and scientific reasoning.
  • Complex Logic & Deduction: Suitable for tasks that involve intricate logical analysis and problem-solving.
  • Educational Tools: Can be integrated into platforms for advanced learning and tutoring in STEM fields, for example behind an OpenAI-compatible API (see the serving sketch after this list).
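For integrations like these, one common pattern is to serve the checkpoint behind an OpenAI-compatible endpoint. The sketch below assumes the model has been launched locally with vLLM's `vllm serve open-thoughts/OpenThinker2-7B` command on its default port; the base URL, API key, and tutoring prompt are assumptions about a local deployment, not part of the official model documentation.

```python
# Illustrative client for an OpenAI-compatible endpoint, e.g. one
# started with:  vllm serve open-thoughts/OpenThinker2-7B
# The base_url and api_key below are assumptions about a local setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="open-thoughts/OpenThinker2-7B",
    messages=[
        {"role": "system", "content": "You are a patient math tutor. Show your reasoning step by step."},
        {"role": "user", "content": "Explain why 1 + 2 + ... + n equals n(n + 1) / 2."},
    ],
    max_tokens=2048,  # reasoning traces can be long, so leave headroom
)
print(response.choices[0].message.content)
```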

For more details, refer to the OpenThoughts paper and the OpenThoughts2 and OpenThinker2 blog post.