Turkish-LLM-14B-Instruct Overview
ogulcanaydogan/Turkish-LLM-14B-Instruct is a 14-billion-parameter model enhanced specifically for Turkish. It is fine-tuned from the Qwen2.5-14B-Instruct base model using QLoRA (4-bit NF4 quantization) on a dataset of 242,000 Turkish instruction examples; this targeted fine-tuning significantly improves its performance on Turkish-specific tasks.
Key Capabilities and Features
- Turkish Language Optimization: Shows improved performance on Turkish benchmarks, outperforming the base Qwen2.5-14B-Instruct by 0.30 points on MMLU-TR.
- Efficient Fine-tuning: Uses QLoRA with LoRA rank 32 and alpha 64, so only a small set of adapter weights is trained on top of the frozen, quantized base model.
- Accessibility: Available with GGUF quantizations (Q4, Q5, Q8, F16) for deployment on various hardware, including local machines via Ollama.
- Part of a Family: Belongs to the broader Turkish LLM Family, offering different sizes for diverse needs.
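The QLoRA settings listed above (rank 32, alpha 64) determine how the trained adapter modifies each target weight matrix: the low-rank product B @ A is scaled by alpha / rank and added to the frozen base weight. A minimal numpy sketch of that merge step, with toy dimensions (the matrix shapes and initialization below are illustrative, not taken from this model):

```python
import numpy as np

rank, alpha = 32, 64        # LoRA hyperparameters reported for this model
d_out, d_in = 64, 64        # toy dimensions; real projection layers are far larger

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))        # frozen base weight (conceptually 4-bit NF4)
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable LoRA "down" matrix
B = np.zeros((d_out, rank))               # trainable LoRA "up" matrix, zero-initialized

scaling = alpha / rank                    # = 2.0 for this model's settings
W_merged = W + scaling * (B @ A)          # merged weight after training

# With B initialized to zero, the adapter starts as an identity update.
print(scaling)                            # 2.0
print(np.allclose(W_merged, W))           # True
```

Because alpha is twice the rank here, the adapter's contribution is scaled by 2.0 at merge time, a common QLoRA convention.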
Ideal Use Cases
- Turkish Instruction Following: Excels in scenarios requiring the model to understand and respond to instructions in Turkish.
- Turkish NLP Applications: Suitable for a wide range of natural language processing tasks in Turkish, such as text generation, summarization, and question answering.
- Resource-Efficient Deployment: The GGUF quantizations make it a strong candidate for applications running on constrained or consumer-grade hardware.
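For local deployment via Ollama, one of the GGUF quantizations can be wrapped in a Modelfile. A hypothetical sketch, assuming a locally downloaded Q4 file (the filename, parameters, and system prompt below are assumptions, not taken from the model's repository):

```
# Modelfile -- hypothetical; adjust the path to your downloaded GGUF file
FROM ./Turkish-LLM-14B-Instruct.Q4_K_M.gguf

# Sampling defaults (illustrative)
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# Turkish system prompt (illustrative): "You are a helpful assistant. Answer in Turkish."
SYSTEM "Sen yardımcı bir asistansın. Türkçe cevap ver."
```

After saving this file, `ollama create turkish-llm-14b -f Modelfile` followed by `ollama run turkish-llm-14b` would serve the model locally.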