zamanayaz/qwen2.5_0.5b_langjson_finetune_16bit
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Feb 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
The zamanayaz/qwen2.5_0.5b_langjson_finetune_16bit model is a 0.5 billion parameter Qwen2.5-Instruct variant developed by zamanayaz. It is fine-tuned specifically for language detection and is designed to respond exclusively in a JSON format indicating either 'english' or 'roman_urdu'. It was trained with Unsloth and Hugging Face's TRL library, which accelerate fine-tuning.
Model Overview
The zamanayaz/qwen2.5_0.5b_langjson_finetune_16bit is a specialized language model developed by zamanayaz, fine-tuned from the unsloth/Qwen2.5-0.5B-Instruct base. This model is designed with a singular focus: language detection, specifically for English and Roman Urdu.
Key Capabilities
- Language Detection: Primarily functions as a language detection assistant.
- JSON Output: Responds exclusively in a structured JSON format indicating the detected language, e.g. `{"language":"english"}` or `{"language":"roman_urdu"}`.
- Optimized Training: Leverages Unsloth and Hugging Face's TRL library for accelerated training, resulting in 2x faster fine-tuning.
- System Prompt: Utilizes a strict system prompt: `You are a language detection assistant. Respond only in JSON: {"language":"english"} or {"language":"roman_urdu"}.`
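A minimal inference sketch using Hugging Face `transformers` is shown below. The model ID and system prompt come from this card; the chat-template settings, BF16 loading, and the `detect_language`/`make_transformers_backend` helper names are assumptions for illustration, not part of the published model card.

```python
import json


MODEL_ID = "zamanayaz/qwen2.5_0.5b_langjson_finetune_16bit"

# The strict system prompt documented for this model.
SYSTEM_PROMPT = (
    "You are a language detection assistant. Respond only in JSON: "
    '{"language":"english"} or {"language":"roman_urdu"}.'
)


def detect_language(text: str, generate_fn) -> str:
    """Build the chat messages, call the model, and parse its JSON reply.

    `generate_fn` takes the messages list and returns the raw model output
    string, so the backend (transformers, vLLM, an API) is pluggable.
    """
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": text},
    ]
    raw = generate_fn(messages)
    return json.loads(raw)["language"]


def make_transformers_backend():
    """Return a generate_fn backed by transformers (assumed setup)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    def generate(messages):
        # Apply the Qwen2.5 chat template and generate a short reply;
        # the JSON answer needs only a handful of tokens.
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        )
        outputs = model.generate(inputs, max_new_tokens=16)
        return tokenizer.decode(
            outputs[0][inputs.shape[-1]:], skip_special_tokens=True
        )

    return generate
```

Separating `detect_language` from the backend keeps the prompt construction and JSON parsing testable without loading the model.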
When to Use This Model
This model is ideal for applications requiring:
- Automated Language Identification: Specifically for distinguishing between English and Roman Urdu.
- Structured Output: When downstream systems require language detection results in a consistent JSON format.
- Lightweight Deployment: Its 0.5 billion parameter size suits resource-constrained environments.
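For downstream systems that consume the structured output, it is prudent to validate the reply rather than trust it blindly, since even a tightly fine-tuned model can occasionally emit malformed text. A minimal sketch (the allowed labels come from this card; the fallback policy and `parse_detection` helper are assumptions):

```python
import json

# The only two labels this model is fine-tuned to emit.
ALLOWED = {"english", "roman_urdu"}


def parse_detection(raw: str, fallback: str = "english") -> str:
    """Parse the model's JSON reply, falling back on malformed output.

    Returns the detected language label, or `fallback` when the reply
    is not valid JSON or names an unexpected label.
    """
    try:
        label = json.loads(raw.strip()).get("language")
    except json.JSONDecodeError:
        return fallback
    return label if label in ALLOWED else fallback
```

This keeps the consistent-JSON guarantee enforceable at the integration boundary instead of assumed.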