csaillard/qwen_finetune_16bit_v4
Text Generation · Open Weights
Model Size: 7.6B · Quant: FP8 · Context Length: 32K · Concurrency Cost: 1
Published: Apr 10, 2026 · License: apache-2.0 · Architecture: Transformer

csaillard/qwen_finetune_16bit_v4 is a 7.6-billion-parameter Qwen2-based language model fine-tuned by csaillard, trained with Unsloth and Hugging Face's TRL library for accelerated fine-tuning. It is derived from unsloth/Qwen2.5-Coder-7B-Instruct, suggesting an optimization for coding-related tasks, and its 32K context length supports processing substantial code blocks or detailed instructions.
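Since the weights are open, the model can presumably be loaded like any other Qwen2.5-based instruct checkpoint via Hugging Face Transformers. A minimal sketch, assuming the `transformers` library and sufficient GPU memory; the prompt-formatting helper and the example user message are illustrative, not part of the card:

```python
# Hypothetical usage sketch for csaillard/qwen_finetune_16bit_v4.
# Assumes the `transformers` library is installed and the repo is accessible.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "csaillard/qwen_finetune_16bit_v4"


def build_chat_prompt(tokenizer, user_message: str) -> str:
    """Format a single-turn chat prompt using the model's own chat template."""
    messages = [{"role": "user", "content": user_message}]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # A coding-style request, since the base model is Qwen2.5-Coder-7B-Instruct.
    prompt = build_chat_prompt(tokenizer, "Write a Python function that reverses a string.")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The `device_map="auto"` and `torch_dtype="auto"` settings let Transformers pick placement and precision; a 7.6B model at FP8/16-bit typically needs a GPU with roughly 10–16 GB of memory.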
