teetone/OpenR1-Distill-Qwen3-1.7B-Math
Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Jan 18, 2026 · Architecture: Transformer

teetone/OpenR1-Distill-Qwen3-1.7B-Math is a language model fine-tuned from Qwen3-1.7B-Base and optimized for mathematical reasoning and multi-step problem solving. It was trained with Supervised Fine-Tuning (SFT) on the open-r1/Mixture-of-Thoughts dataset, which strengthens its ability to work through intricate problems. The model targets applications that require robust logical deduction and mathematical understanding, packing specialized performance into a compact 1.7-billion-parameter architecture.
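A minimal sketch of using the model for math problems via the Hugging Face `transformers` text-generation pipeline. The model ID comes from this card; the helper names (`build_prompt`, `generate`) and the system-prompt wording are illustrative assumptions, not part of the official usage.

```python
MODEL_ID = "teetone/OpenR1-Distill-Qwen3-1.7B-Math"


def build_prompt(problem: str) -> list[dict]:
    """Build chat-style messages for the model.

    The system prompt wording here is an assumption; adjust to taste.
    """
    return [
        {
            "role": "system",
            "content": "You are a careful math assistant. Reason step by step.",
        },
        {"role": "user", "content": problem},
    ]


def generate(problem: str, max_new_tokens: int = 512) -> str:
    """Run one generation. Downloads the BF16 weights on first call."""
    # Imported lazily so the helper above can be used without transformers installed.
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID, torch_dtype="bfloat16")
    out = generator(build_prompt(problem), max_new_tokens=max_new_tokens)
    # The pipeline returns the full chat transcript; take the last (assistant) turn.
    return out[0]["generated_text"][-1]["content"]
```

Keeping `max_new_tokens` generous matters for a reasoning-tuned model like this one, since its chain-of-thought output can be long; the 32k context window leaves ample room.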
