HarunYigit/gemma3-4b-turkish-thinking

Capabilities: Vision · Model size: 4.3B · Quant: BF16 · Context length: 32k · Published: Jan 8, 2026 · License: MIT · Architecture: Transformer · Open weights

HarunYigit/gemma3-4b-turkish-thinking is a 4.3-billion-parameter Gemma3-4B language model fine-tuned by HarunYigit for Turkish reasoning and Chain-of-Thought (CoT) style responses. It excels at instruction-following and logical reasoning in Turkish, and produces stable final-answer formats suitable for multiple-choice benchmarks. The model is released in GGUF format for compatible, optimized inference with Ollama and llama.cpp.


Gemma3-4B Turkish Reasoning Model

This model, developed by HarunYigit, is a Gemma3-4B language model specifically fine-tuned for Turkish reasoning and Chain-of-Thought (CoT) style responses. It focuses on enhancing the model's ability to follow instructions and perform logical reasoning in Turkish, while maintaining stable and evaluable outputs.

Key Capabilities

  • Turkish Chain-of-Thought Reasoning: Designed to generate step-by-step reasoning in Turkish.
  • Instruction-Following: Optimized for accurate and consistent adherence to Turkish instructions.
  • Stable Output Formatting: Provides reliable "final answer" formats, making it suitable for automated evaluation.
  • Benchmark Compatibility: Structured for effective use with multiple-choice benchmark tests.
  • Inference Optimization: Released in GGUF format, ensuring full compatibility and optimized performance with popular inference engines like Ollama and llama.cpp.
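Since the weights ship as GGUF, one way to query the model locally is through Ollama's HTTP API after importing the GGUF file. This is a minimal sketch, not an official usage example from the model card: the model tag `gemma3-4b-turkish-thinking` is a hypothetical local name (use whatever tag you imported the GGUF under), and the endpoint is Ollama's default `/api/generate` on port 11434.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL_TAG = "gemma3-4b-turkish-thinking"  # hypothetical tag; match your local Ollama import

def build_request(prompt: str) -> dict:
    """Assemble a non-streaming request body for Ollama's /api/generate endpoint."""
    return {
        "model": MODEL_TAG,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON response instead of a token stream
    }

def generate(prompt: str) -> str:
    """POST the prompt to a locally running Ollama server and return the response text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example call (requires a running Ollama server with the model imported):
# generate("Adım adım açıkla: Gökyüzü neden mavidir?")
# Turkish prompt meaning: "Explain step by step: why is the sky blue?"
```

The same GGUF file can instead be loaded directly with llama.cpp's own tooling; the HTTP route above is just the lighter-weight option when Ollama is already installed.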

Good for

  • Applications requiring advanced Turkish logical reasoning.
  • Developing Turkish chatbots or assistants that need to explain their thought process.
  • Evaluating LLM performance on Turkish instruction-following and reasoning tasks.
  • Deploying efficient Turkish language models on local hardware using Ollama or llama.cpp.