OrpoGemma-2-9B-TR: Turkish Fine-tuned Gemma-2 Model
OrpoGemma-2-9B-TR is a 9-billion-parameter language model by selimc, a Turkish fine-tune of Google's Gemma-2-9B-IT. The model was optimized for Turkish language generation and understanding using ORPO (Odds Ratio Preference Optimization).
Key Capabilities
- Turkish Language Fluency: Produces coherent, contextually appropriate, and fluent text in Turkish.
- Detailed Response Generation: Capable of delivering informative and comprehensive answers across a wide array of instructions and question types.
- ORPO Fine-tuning: Utilizes the ORPO Trainer on a 1,500-row subset of the selimc/orpo-dpo-mix-TR-20k dataset, enhancing its performance in Turkish.
- Performance Metrics: Achieves an average score of 55.9% on the OpenLLMTurkishLeaderboard_v0.2 across various Turkish benchmarks, including MMLU_TR_V0.2 (53.0%) and GSM8K_TR_V0.2 (64.8%).
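ORPO augments the ordinary language-modeling loss with a preference term built from the odds ratio between a chosen and a rejected completion. A minimal sketch of that preference term, assuming length-normalized log-probabilities as inputs and a hypothetical `beta` weighting hyperparameter (the actual ORPO Trainer handles tokenization, batching, and the combined loss):

```python
import math

def odds_ratio_loss(logp_chosen: float, logp_rejected: float, beta: float = 0.1) -> float:
    """ORPO preference term: -beta * log(sigmoid(log_odds(chosen) - log_odds(rejected))).

    logp_chosen / logp_rejected are length-normalized log-probabilities
    of the preferred and dispreferred responses under the model.
    """
    def log_odds(logp: float) -> float:
        # odds(p) = p / (1 - p), computed from the log-probability
        p = math.exp(logp)
        return logp - math.log(1.0 - p)

    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    # log(sigmoid(x)) = -log(1 + e^{-x})
    return -beta * (-math.log(1.0 + math.exp(-ratio)))

# The penalty shrinks as the model assigns higher odds to the chosen response:
tied = odds_ratio_loss(logp_chosen=-0.5, logp_rejected=-0.5)
separated = odds_ratio_loss(logp_chosen=-0.2, logp_rejected=-1.5)
```

Because the term stays near zero once the chosen response is clearly preferred, ORPO can align the model during fine-tuning without a separate reference model, which is what makes it practical for adapting Gemma-2-9B-IT on a modest preference dataset.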
Good For
- Applications requiring high-quality text generation in Turkish.
- Developing chatbots or conversational AI systems for Turkish-speaking users.
- Tasks that benefit from detailed and contextually relevant responses in Turkish.