Overview
mesolitica/Malaysian-Qwen2.5-7B-Reasoning-SFT is a 7.6-billion-parameter model built on mesolitica/Malaysian-Qwen2.5-7B-Instruct. It has been fine-tuned on a specialized Malaysian reasoning dataset to improve its analytical and problem-solving skills in a Malaysian context, and it supports a context length of 32,768 tokens.
Key Capabilities
- Enhanced Reasoning: Demonstrates improved reasoning across diverse domains including mathematics, science, translation, multiple-choice questions, and coding.
- Malaysian Context Specialization: Specifically trained to understand and generate responses relevant to Malaysian dialects and cultural nuances, including content from Maktabah Al Bakri.
- Dialect Translation: Achieves an average sacrebleu chrF score of 53.86 for dialect-to-standard-Malay translation and 50.24 for standard-Malay-to-dialect translation, indicating proficiency with regional linguistic variation.
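For intuition about the chrF numbers above, here is a rough, self-contained sketch of how a chrF-style score is computed: an F-beta score over character n-gram precision and recall, averaged across n-gram orders. This is a simplification for illustration only; the reported scores come from the sacrebleu library, and the example strings below are invented.

```python
from collections import Counter

def char_ngrams(text, n):
    # chrF works on character n-grams; like sacrebleu's default, spaces are removed.
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def simple_chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chrF: average character n-gram precision/recall, combined as F-beta."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # skip orders longer than either string
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    denom = beta ** 2 * p + r
    return 100.0 * (1 + beta ** 2) * p * r / denom if denom else 0.0
```

A score of 100 means the hypothesis matches the reference exactly at the character level; scores in the 50s, as reported above, indicate substantial but imperfect n-gram overlap.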
Training Details
The model underwent full-parameter fine-tuning at a 12k context length on the mesolitica/Malaysian-Reasoning dataset. Training runs were tracked with Weights & Biases.
Good For
This model is particularly well-suited for applications requiring strong reasoning abilities and deep understanding of Malaysian linguistic and cultural contexts. It excels in tasks involving complex problem-solving, multilingual translation (especially between Malaysian dialects and standard Malay), and educational content generation tailored for Malaysian users.
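A minimal inference sketch with Hugging Face transformers, assuming the model ships a standard Qwen2.5 chat setup. The `build_chatml_prompt` helper illustrates the ChatML turn format that Qwen2.5-family models generally use; in practice `tokenizer.apply_chat_template`, as used in `generate`, is the authoritative source of the template. The helper and function names, the system message, and the example prompt are illustrative, not part of the model card.

```python
def build_chatml_prompt(user_message, system_message="You are a helpful assistant."):
    # Illustration of the ChatML format typical of Qwen2.5-family models;
    # prefer tokenizer.apply_chat_template, which reads the model's own template.
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def generate(user_message, max_new_tokens=512):
    # Heavy dependencies are imported lazily so the prompt helper stays standalone.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mesolitica/Malaysian-Qwen2.5-7B-Reasoning-SFT"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = [{"role": "user", "content": user_message}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For dialect translation or reasoning tasks, pass the task instruction as the user message, e.g. `generate("Terjemahkan ke dialek Kelantan: ...")`.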