SeaLLM-7B-v2.5 is a multilingual large language model from SeaLLMs, built on the Gemma-7B architecture (approximately 8.5 billion parameters, despite the 7B in the name). It is optimized for Southeast Asian (SEA) languages and performs strongly across diverse multilingual tasks, including world knowledge, math reasoning, and instruction following. On benchmarks such as MMLU, M3Exam, VMLU, GSM8K, and MATH, it often outperforms larger models and GPT-3.5 in SEA-language contexts.