igarin/Qwen2.5-Coder-7B-20260302-MERGED-16bit
Text generation
Concurrency cost: 1
Model size: 7.6B
Quant: FP8
Context length: 32K
Published: Mar 2, 2026
License: cc-by-nc-4.0
Architecture: Transformer
Open weights · Cold

igarin/Qwen2.5-Coder-7B-20260302-MERGED-16bit is a 7.6-billion-parameter fine-tune of Qwen2.5-Coder by igarin, designed for code-related tasks. It was trained with Unsloth and Hugging Face's TRL library, which enable faster fine-tuning, and its 32K-token context window lets it process substantial codebases in a single pass.
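For local inference, a checkpoint like this can typically be loaded with Hugging Face Transformers. The sketch below is a minimal, unverified example: it assumes the weights are published on the Hugging Face Hub under the repo id shown, and that your hardware can hold a 7.6B-parameter model; the `generate` helper and the sample prompt are illustrative, not part of the model card.

```python
# Minimal sketch: loading the model with Hugging Face Transformers.
# Assumption: the checkpoint is available on the Hub under this repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "igarin/Qwen2.5-Coder-7B-20260302-MERGED-16bit"

def complete_code(prompt: str, max_new_tokens: int = 256) -> str:
    """Return a code completion for `prompt` (downloads weights on first call)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(complete_code("def fibonacci(n: int) -> int:"))
```

With the model's 32K context length, `prompt` can include several source files of surrounding code before the point to complete.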
