ogulcanaydogan/Turkish-LLM-14B-Instruct

Text generation · Concurrency cost: 1 · Model size: 14.8B · Quant: FP8 · Context length: 32k · Published: Mar 4, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Turkish-LLM-14B-Instruct by ogulcanaydogan is a 14.8-billion-parameter instruction-tuned language model, fine-tuned from Qwen2.5-14B-Instruct with QLoRA on 242K Turkish instruction examples. The model is optimized for Turkish-language tasks, scoring +0.30 points higher on MMLU-TR than its base model, and is designed for applications that require strong Turkish language understanding and generation.


Turkish-LLM-14B-Instruct Overview

ogulcanaydogan/Turkish-LLM-14B-Instruct is a 14.8-billion-parameter model adapted for the Turkish language. It was fine-tuned from the Qwen2.5-14B-Instruct base model using QLoRA (4-bit NF4 quantization) on 242,000 Turkish instruction examples, which measurably improves its performance on Turkish-specific tasks.

Key Capabilities and Features

  • Turkish Language Optimization: Improved performance on Turkish benchmarks, outperforming the base Qwen2.5-14B-Instruct by +0.30 points on MMLU-TR.
  • Efficient Fine-tuning: Trained with QLoRA at rank 32 and alpha 64, updating only a small set of adapter weights on top of the frozen 4-bit base model.
  • Accessibility: Available in GGUF quantizations (Q4, Q5, Q8, F16) for deployment on a range of hardware, including local machines via Ollama.
  • Part of a Family: Belongs to the broader Turkish LLM Family, which offers multiple model sizes for different hardware budgets.
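
The QLoRA settings listed above (rank 32, alpha 64, 4-bit NF4) can be expressed as a configuration sketch with the Hugging Face `transformers` and `peft` libraries. The rank, alpha, and NF4 values come from the model card; the target modules, dropout, and compute dtype are not published here and are assumptions typical of Qwen2.5 fine-tunes.

```python
# Illustrative QLoRA configuration sketch; not the author's exact recipe.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # NF4 quantization, per the model card
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: common choice
)

lora_config = LoraConfig(
    r=32,                               # LoRA rank, per the model card
    lora_alpha=64,                      # LoRA alpha, per the model card
    lora_dropout=0.05,                  # assumption: a typical default
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
```

With such a configuration, only the low-rank adapter weights are trained while the 4-bit base model stays frozen, which is what makes a 14.8B fine-tune feasible on modest GPU memory.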

Ideal Use Cases

  • Turkish Instruction Following: Understanding and responding to instructions written in Turkish.
  • Turkish NLP Applications: General Turkish natural language processing tasks such as text generation, summarization, and question answering.
  • Resource-Efficient Deployment: The GGUF builds make it a strong candidate for hardware-constrained deployments, including local inference via Ollama.
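
As a rough illustration of instruction-style prompting, the sketch below hand-builds a chat prompt in the ChatML format used by Qwen2.5-style models, under the assumption that this fine-tune keeps the base model's chat template. In practice, `tokenizer.apply_chat_template` from `transformers` should be preferred over manual formatting.

```python
def format_chatml(messages):
    """Format chat messages in the ChatML style used by Qwen2.5-family
    models (assumption: the fine-tune keeps the base model's template)."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open the assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "Sen yardımcı bir asistansın."},   # "You are a helpful assistant."
    {"role": "user", "content": "İstanbul'un en ünlü simgesi nedir?"},  # "What is Istanbul's most famous landmark?"
])
print(prompt)
```

The resulting string would be fed to the model as-is (for example via a GGUF runtime), which then completes the open assistant turn in Turkish.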