mahernaija/qwen25-32b-nemotron-finetuned
Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

mahernaija/qwen25-32b-nemotron-finetuned is a 32.8-billion-parameter language model: a full fine-tune of Qwen/Qwen2.5-32B by mahernaija. It is optimized for step-by-step reasoning across math, code, and science problems, incorporating explicit reasoning traces. The model excels at generating detailed reasoning chains, showing significant ROUGE-L improvements in these domains while largely preserving performance on general-knowledge benchmarks.
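ROUGE-L, the metric cited above, scores a generated answer against a reference by the length of their longest common subsequence (LCS) of tokens. A minimal sketch of the idea (a simple whitespace-tokenized F1 variant, not the official ROUGE implementation, shown here only to illustrate what the reported improvement measures):

```python
def lcs_len(a: list[str], b: list[str]) -> int:
    """Length of the longest common subsequence, via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            # Extend the match if tokens agree; otherwise carry the best so far.
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L as an F1 score over whitespace tokens."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

score = rouge_l_f1("the cat sat on the mat", "the cat lay on the mat")
print(round(score, 3))  # LCS is 5 of 6 tokens, so F1 = 5/6 ≈ 0.833
```

Because the LCS preserves token order without requiring contiguity, ROUGE-L rewards answers that follow the reference's reasoning sequence, which is why it is a natural metric for step-by-step outputs.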
