M-Alkassem/qwen2.5-coder-3b-final-merged
Text Generation · Model Size: 3.1B · Quant: BF16 · Context Length: 32k · Concurrency Cost: 1 · Published: Apr 2, 2026 · License: apache-2.0 · Architecture: Transformer

M-Alkassem/qwen2.5-coder-3b-final-merged is a 3.1 billion parameter model developed by M-Alkassem, based on Qwen2.5-Coder-3B-Instruct and fine-tuned for agent-oriented coding workflows. With a 32,768-token context length, the model is optimized for constrained tool-using scenarios and is intended to serve as the reasoning core of lightweight coding agents. It was produced through a two-stage adaptation pipeline: coding-focused fine-tuning followed by agent-oriented continued fine-tuning. Its primary strengths are identifying bugs, rewriting code, and managing test cycles within an agentic framework.