rdhawk/TR_TaskSpesificLM
The rdhawk/TR_TaskSpesificLM is a 9-billion-parameter language model developed by rdhawk and fine-tuned specifically for Turkish-language tasks. Built on the google/gemma-2-9b-it architecture, it supports a 16,384-token context length and is optimized for diverse Turkish question formats, including open-ended, multiple-choice, and fill-in-the-blank queries.
Model Overview
rdhawk/TR_TaskSpesificLM is a 9-billion-parameter language model derived from the google/gemma-2-9b-it base model. It was fine-tuned specifically for Turkish by rdhawk using the unsloth framework to improve performance on a range of Turkish natural language processing tasks, and it supports a context length of 16,384 tokens.
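A minimal sketch of how a model like this can be loaded and queried with Hugging Face transformers, assuming it is published on the Hub under the id rdhawk/TR_TaskSpesificLM. The `build_prompt` helper uses the standard Gemma-2 instruction-tuned chat format inherited from the base model; the helper name and generation parameters are illustrative, not part of the model card.

```python
def build_prompt(question: str) -> str:
    """Wrap a Turkish question in the Gemma-2 instruction-tuned chat format."""
    return (
        "<start_of_turn>user\n"
        f"{question}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate_answer(question: str, max_new_tokens: int = 256) -> str:
    """Load the model lazily and generate an answer.

    Requires a GPU with enough memory for a 9B-parameter model; the model id
    is assumed from the card above.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "rdhawk/TR_TaskSpesificLM"  # assumed Hub id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

The heavy imports live inside `generate_answer` so that `build_prompt` can be used (for example, to inspect or log prompts) without pulling in torch.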
Key Capabilities
- Turkish Language Specialization: Optimized for understanding and generating text in Turkish.
- Diverse Question Answering: Trained on a mixed dataset of 50,000 rows, enabling it to handle:
  - Open-ended questions
  - Multiple-choice questions
  - Fill-in-the-blank questions
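To make the three question formats concrete, here are invented Turkish example prompts, one per format. These are illustrations only, not rows from the actual training set.

```python
# Hypothetical example prompts for each of the three trained question formats.
EXAMPLE_PROMPTS = {
    "open_ended": "Türkiye'nin başkenti neden Ankara olarak seçilmiştir?",
    "multiple_choice": (
        "Türkiye'nin başkenti hangisidir?\n"
        "A) İstanbul  B) Ankara  C) İzmir  D) Bursa"
    ),
    "fill_in_the_blank": "Türkiye'nin başkenti ____ şehridir.",
}


def example_prompt(kind: str) -> str:
    """Return the raw example prompt text for a given question format."""
    return EXAMPLE_PROMPTS[kind]
```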
Training Details
The model was trained for approximately 20 hours on a single NVIDIA A6000 GPU. Its training data, the 50,000-row mixed dataset described above, was designed to cover a wide range of question formats, making the model suitable for Turkish-specific applications that require nuanced understanding and response generation.
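The actual training script is not published, but a fine-tune of gemma-2-9b-it with the unsloth framework typically follows the QLoRA pattern below. Everything here is a hedged sketch: the dataset file name, column names, LoRA rank, and hyperparameters are assumptions, and the `format_row` chat template mirrors Gemma-2's standard turn markers.

```python
def format_row(question: str, answer: str) -> str:
    """Turn one (question, answer) dataset row into a Gemma-2 chat-formatted
    training example."""
    return (
        f"<start_of_turn>user\n{question}<end_of_turn>\n"
        f"<start_of_turn>model\n{answer}<end_of_turn>\n"
    )


def train() -> None:
    """Sketch of an unsloth QLoRA fine-tune; needs an unsloth-capable GPU
    (e.g. the A6000 mentioned above). All paths and settings are assumed."""
    from datasets import load_dataset
    from transformers import TrainingArguments
    from trl import SFTTrainer
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="google/gemma-2-9b-it",
        max_seq_length=16384,   # matches the card's stated context length
        load_in_4bit=True,      # QLoRA-style 4-bit base weights
    )
    model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

    # Hypothetical file and column names for the 50,000-row mixed dataset.
    dataset = load_dataset("json", data_files="tr_mixed_50k.jsonl")["train"]
    dataset = dataset.map(
        lambda row: {"text": format_row(row["question"], row["answer"])}
    )

    SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        args=TrainingArguments(
            output_dir="tr_taskspecific_lm",
            per_device_train_batch_size=2,
            gradient_accumulation_steps=8,
            num_train_epochs=1,
            learning_rate=2e-4,
        ),
    ).train()
```

The 4-bit base plus LoRA adapters is what makes training a 9B model feasible on a single A6000 within the stated time budget.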