emna04/mathtutor-qwen2.5-math-7b-merged
Text Generation | Concurrency Cost: 1 | Model Size: 7.6B | Quant: FP8 | Ctx Length: 32k | Published: Apr 12, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights
emna04/mathtutor-qwen2.5-math-7b-merged is a 7.6-billion-parameter language model based on the Qwen2.5 architecture and fine-tuned for mathematical tasks. It merges a LoRA adapter into the unsloth/Qwen2.5-Math-7B base model and supports a 32,768-token context length. The model targets mathematical reasoning and problem-solving applications.
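A minimal usage sketch with the Hugging Face `transformers` library is shown below. The model ID comes from this card; the prompt and generation settings are illustrative assumptions, not values specified by the model author.

```python
# Minimal sketch: load the merged model and run a math prompt.
# Assumes `transformers`, `torch`, and `accelerate` are installed;
# the example question and max_new_tokens are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "emna04/mathtutor-qwen2.5-math-7b-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # place weights on available GPU/CPU
)

# Build a chat-formatted prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Solve for x: 3x + 5 = 20"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and print only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```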