yuzhounie/sft_qwen32b
Text generation
- Concurrency cost: 2
- Model size: 32.8B
- Quantization: FP8
- Context length: 32k
- Published: Jan 4, 2026
- License: other
- Architecture: Transformer
- Status: Cold

yuzhounie/sft_qwen32b is a 32.8-billion-parameter language model fine-tuned from Qwen/Qwen2.5-Coder-32B-Instruct on the tb3000_agent_diverse_real dataset for 5 epochs. The fine-tuning targets agent-based tasks and diverse real-world scenarios, building on the base model's instruction-following and coding capabilities.
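Since this is an instruction-tuned chat model, it would typically be queried through a chat-style API. The sketch below builds a request body in the common OpenAI-compatible chat-completions format; the endpoint shape, parameter names, and system prompt are assumptions here, not documented behavior of this host, so check the provider's API reference before use.

```python
import json

# Model identifier as listed on the hosting page.
MODEL_ID = "yuzhounie/sft_qwen32b"

def build_chat_request(user_message: str,
                       max_tokens: int = 512,
                       temperature: float = 0.2) -> dict:
    """Build a chat-completions request body (OpenAI-compatible shape, assumed).

    max_tokens is kept well under the model's 32k context window; the
    system prompt is a placeholder reflecting the model's agent focus.
    """
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a helpful coding agent."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_chat_request("Count the lines in every *.py file in this repo.")
print(json.dumps(payload, indent=2))
```

The resulting JSON would be POSTed to the provider's chat-completions endpoint with an API key; only the payload construction is shown here since the exact URL and auth scheme depend on the host.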