TMLR-Group-HF/GT-Qwen3-1.7B-Base-MATH
Task: Text generation
Concurrency cost: 1
Model size: 2B
Quantization: BF16
Context length: 32k
Published: Aug 14, 2025
License: MIT
Architecture: Transformer (open weights)

TMLR-Group-HF/GT-Qwen3-1.7B-Base-MATH is a 1.7-billion-parameter Qwen3-based language model from TMLR-Group-HF, trained with the GRPO Ground Truth method on a mathematical dataset. With a 40,960-token context length, the model is optimized for reasoning and mathematical tasks, and its specialized training makes it particularly suitable for applications that require robust mathematical problem solving.
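A minimal usage sketch with the `transformers` library, assuming the checkpoint is available on the Hugging Face Hub under this repo id. The prompt format and generation settings below are illustrative assumptions; the model card does not prescribe them.

```python
MODEL_ID = "TMLR-Group-HF/GT-Qwen3-1.7B-Base-MATH"


def build_prompt(problem: str) -> str:
    """Wrap a math problem in a plain instruction-style prompt.

    The exact prompt format is an assumption; the model card does not
    specify one.
    """
    return f"Problem: {problem}\nSolution:"


def generate_solution(problem: str, max_new_tokens: int = 256) -> str:
    """Load the model in BF16 and generate a solution for one problem."""
    # Heavy imports are kept local so build_prompt stays dependency-free.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(problem), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Calling `generate_solution("What is 12 * 17?")` downloads the weights on first use and returns the model's completion as a string.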
