Jason-hu/Qwen2.5-3B-GSM8K-GRPO-H200
Text Generation · Concurrency Cost: 1 · Model Size: 3.1B · Quant: BF16 · Ctx Length: 32k · Published: Mar 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Qwen2.5-3B-GSM8K-GRPO-H200 is a 3.1-billion-parameter language model developed by Jason-hu and fine-tuned for mathematical reasoning. Built on Qwen2.5-3B-Instruct, it was trained with LoRA-based supervised fine-tuning (SFT) on the GSM8K dataset. The model is optimized for mathematical problem solving, offering improved performance on quantitative-reasoning tasks, and supports a context length of 32,768 tokens.
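The card does not include a usage snippet, so the following is a minimal sketch of loading the model and posing a GSM8K-style word problem with the Hugging Face transformers API. The repository ID is assumed from the page title, and the snippet assumes the checkpoint loads as a standard causal LM with a chat template:

```python
# Minimal usage sketch. Assumptions: the repo ID below matches this page's
# title, and the checkpoint works with the standard transformers chat API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jason-hu/Qwen2.5-3B-GSM8K-GRPO-H200"  # assumed from the page title

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

# A GSM8K-style word problem, formatted as a chat prompt.
messages = [
    {
        "role": "user",
        "content": (
            "Natalia sold clips to 48 of her friends in April, and then she "
            "sold half as many clips in May. How many clips did Natalia sell "
            "altogether in April and May?"
        ),
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```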
