open-r1/OpenR1-Qwen-7B
Task: Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Feb 5, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

OpenR1-Qwen-7B is a 7.6-billion-parameter language model from open-r1, fine-tuned from Qwen2.5-Math-Instruct on the OpenR1-220k-Math dataset. It specializes in mathematical reasoning, and its extended context length of 32k tokens makes it suitable for the long chains of reasoning that complex mathematical problem-solving requires.
