Overview
neuralwork/gemma-2-9b-it-tr is a 9-billion-parameter instruction-tuned language model built on google/gemma-2-9b-it and fine-tuned specifically for Turkish, with a focus on question answering and conversational interaction.
Key Capabilities
- Turkish Language Proficiency: Optimized for understanding and generating responses in Turkish.
- Question Answering: Provides accurate answers to queries posed in Turkish.
- Conversational AI: Designed for engaging in natural and coherent Turkish conversations.
- Improved Reasoning: Shows stronger reasoning in Turkish than the base model.
Training Details
The model was fine-tuned with LoRA (rank=128, lora_alpha=64) over 4 days on a single RTX 6000 Ada GPU. The training set comprised roughly 55,000 carefully curated, manually filtered Turkish question-answering and conversational samples, drawn from a filtered version of metedb/turkish_llm_datasets plus a private dataset of 8,000 conversational samples.
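For intuition on the LoRA settings above, here is a minimal sketch of the trainable-parameter arithmetic: a rank-r adapter on a d_out x d_in weight matrix W trains two small matrices, A (r x d_in) and B (d_out x r), instead of W itself. The hidden dimension used below is an assumed illustrative value, not taken from this card.

```python
def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """Parameters trained by one LoRA adapter: A is (r x d_in), B is (d_out x r)."""
    return r * (d_in + d_out)

# Assumed example dimension for a square projection matrix; the real
# per-layer shapes depend on which modules the adapter targets.
hidden = 3584
per_matrix = lora_trainable_params(hidden, hidden, 128)
print(per_matrix)  # parameters added per adapted matrix at rank 128
```

At rank 128 each adapted square projection of this size trains under a million parameters, which is why the full fine-tune fits on a single GPU.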
Good For
- Applications requiring high-quality Turkish language understanding and generation.
- Building Turkish chatbots or conversational agents.
- Developing Turkish-specific question-answering systems.
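When building a chatbot on this model, prompts would normally be rendered with the tokenizer's chat template; as a plain-Python sketch of the turn format Gemma-2 models expect (an assumption carried over from the base model, not stated in this card; the Turkish message is illustrative only):

```python
def build_gemma_prompt(messages: list[dict]) -> str:
    """Render {role, content} messages into Gemma's <start_of_turn> format."""
    parts = []
    for msg in messages:
        # Gemma uses the role name "model" for assistant turns.
        role = "model" if msg["role"] == "assistant" else "user"
        parts.append(f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n")
    # Trailing open model turn cues the model to generate its reply.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = build_gemma_prompt([{"role": "user", "content": "Merhaba, nasılsın?"}])
```

In practice, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from transformers so the template always matches the tokenizer shipped with the model.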