zamanayaz/qwen2.5_0.5b_langjson_finetune_16bit
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Feb 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The zamanayaz/qwen2.5_0.5b_langjson_finetune_16bit model is a 0.5-billion-parameter Qwen2.5-Instruct variant developed by zamanayaz. It is fine-tuned for language detection: given an input text, it responds exclusively with a JSON object labeling the text as either 'english' or 'roman_urdu'. It was trained with Unsloth and Hugging Face's TRL library to speed up training.
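A minimal sketch of querying the model with the Hugging Face Transformers library is shown below. The JSON key name (`"language"`) and the exact reply schema are assumptions for illustration; the card only states the two possible labels, so check the model's actual output format before relying on this.

```python
import json


def parse_label(reply: str) -> str:
    """Extract the predicted label from the model's JSON reply.

    Assumes a reply shaped like {"language": "english"}; the key name
    is hypothetical, since the card only documents the two labels.
    """
    label = json.loads(reply)["language"]
    if label not in ("english", "roman_urdu"):
        raise ValueError(f"unexpected label: {label!r}")
    return label


def detect_language(text: str) -> str:
    """Run the fine-tune on `text` and parse its JSON answer.

    Requires `transformers` and `torch`, and downloads the model
    weights on first use, so the import is kept local.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "zamanayaz/qwen2.5_0.5b_langjson_finetune_16bit"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Build a chat-formatted prompt and generate a short JSON reply.
    messages = [{"role": "user", "content": text}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(input_ids, max_new_tokens=32)
    reply = tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    return parse_label(reply)
```

For example, `detect_language("kya haal hai bhai")` would be expected to return `'roman_urdu'` under the assumed schema.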
