open-thoughts/OpenThinker2-7B

Available on Hugging Face

Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Apr 3, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

OpenThinker2-7B is a 7.6 billion parameter instruction-tuned language model developed by open-thoughts, fine-tuned from Qwen2.5-7B-Instruct. It is specifically optimized for reasoning tasks, demonstrating performance comparable to other state-of-the-art 7B models on benchmarks like AIME24, AMC23, and MATH500. This model excels in complex problem-solving and mathematical reasoning, making it suitable for applications requiring advanced analytical capabilities.


OpenThinker2-7B: A Leading 7B Reasoning Model

OpenThinker2-7B is a 7.6 billion parameter model developed by open-thoughts, fine-tuned from Qwen/Qwen2.5-7B-Instruct. It is distinguished as a top-performing 7B open-data reasoning model, achieving strong results across a suite of challenging tasks. The model was trained on the extensive OpenThoughts2-1M dataset, which augments the original OpenThoughts-114k with additional math and code reasoning data generated through advanced methodologies.
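Because OpenThinker2-7B is fine-tuned from Qwen/Qwen2.5-7B-Instruct, it presumably inherits the ChatML-style chat template used by Qwen2.5-Instruct models. The sketch below builds such a prompt by hand; the `<|im_start|>`/`<|im_end|>` markers follow the Qwen2.5 convention and should be verified against the model's actual tokenizer configuration:

```python
def build_chatml_prompt(messages):
    """Render a list of {role, content} messages in the ChatML-style
    format used by Qwen2.5-Instruct models, ending with an open
    assistant header so the model continues from there."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a careful mathematical reasoner."},
    {"role": "user", "content": "What is the sum of the first 10 positive integers?"},
])
```

In practice, `tokenizer.apply_chat_template(...)` from the transformers library applies the template shipped with the model automatically, which is the safer option.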

Key Capabilities & Performance

  • Advanced Reasoning: Demonstrates strong performance in complex reasoning tasks, including mathematical problem-solving and logical deduction.
  • Competitive Benchmarks: Achieves scores comparable to state-of-the-art 7B models like DeepSeek-R1-Distill-7B on benchmarks such as AIME24 (50.0), AIME25 (33.3), AMC23 (89.5), and MATH500 (88.4).
  • Enhanced Training Data: Benefits from the OpenThoughts2-1M dataset, which incorporates diverse math and code reasoning examples.

Ideal Use Cases

  • Mathematical Problem Solving: Excellent for applications requiring high accuracy in mathematical and scientific reasoning.
  • Complex Logic & Deduction: Suitable for tasks that involve intricate logical analysis and problem-solving.
  • Educational Tools: Can be integrated into platforms for advanced learning and tutoring in STEM fields.
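For any of these use cases, the model can be called through an OpenAI-compatible chat completions API such as the one Featherless exposes. The payload below is a hypothetical sketch: the exact base URL and supported parameters depend on the provider, so only the request body is constructed here, not sent.

```python
import json

# Hypothetical request body for an OpenAI-compatible chat completions
# endpoint serving this model. The generation settings are illustrative
# assumptions, not provider-verified defaults.
payload = {
    "model": "open-thoughts/OpenThinker2-7B",
    "messages": [
        {"role": "user",
         "content": "A triangle has sides 13, 14, and 15. Find its area."},
    ],
    "max_tokens": 2048,   # reasoning traces can be long; leave headroom
    "temperature": 0.6,
}
body = json.dumps(payload)
```

A longer `max_tokens` budget matters for reasoning-tuned models like this one, since they emit an extended chain of thought before the final answer.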

For more details, refer to the OpenThoughts Paper and the OpenThoughts2 and OpenThinker2 Blog Post.

Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model cover the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
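These parameters map directly onto a generation configuration. The values below are placeholders chosen for illustration, not the actual Featherless user presets (which are not reproduced in this page's text):

```python
# Illustrative sampler configuration using the parameters listed above.
# Every value here is an assumed placeholder, not a verified preset.
sampler_config = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}

# With the transformers library, most of these correspond to
# GenerationConfig fields, e.g.:
# model.generate(**inputs, do_sample=True, temperature=0.7,
#                top_p=0.9, top_k=40, repetition_penalty=1.05)
```

Frequency and presence penalties come from the OpenAI-style API surface rather than transformers' `generate`, so which subset applies depends on how the model is served.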