OpenThinker2-32B is a 32.8-billion-parameter instruction-tuned causal language model developed by open-thoughts, fine-tuned from Qwen2.5-32B-Instruct. Optimized for complex reasoning, mathematics, and code, it was trained on the OpenThoughts2-1M dataset, which augments its predecessor with additional math and code reasoning data. The model performs strongly across standard reasoning benchmarks, making it well suited to advanced analytical tasks.
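A minimal usage sketch with the Hugging Face `transformers` library is shown below. The repo id `open-thoughts/OpenThinker2-32B`, the prompt text, and the generation settings are assumptions for illustration; a 32B model requires substantial GPU memory, so the heavy load is kept behind the main guard.

```python
MODEL_ID = "open-thoughts/OpenThinker2-32B"  # assumed Hugging Face repo id

def build_messages(problem: str) -> list[dict]:
    """Wrap a single user problem in the chat-message format
    expected by tokenizer.apply_chat_template()."""
    return [{"role": "user", "content": problem}]

if __name__ == "__main__":
    # Heavy imports and model load kept here so the helper above
    # can be used without pulling in transformers/torch.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Render the chat messages into the model's prompt template.
    prompt = tokenizer.apply_chat_template(
        build_messages("Prove that the sum of two even integers is even."),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:],
        skip_special_tokens=True,
    ))
```

Reasoning-tuned models of this family typically emit an extended chain of thought before the final answer, so a generous `max_new_tokens` budget is advisable.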