kyujinpy/KO-Platypus2-13B
KO-Platypus2-13B is an auto-regressive language model developed by Kyujin Han (kyujinpy) based on the LLaMA2 transformer architecture. Fine-tuned on the KOpen-platypus dataset, a high-quality Korean translation of Open-Platypus, this model is optimized for Korean language understanding and generation. It demonstrates competitive performance on the Open KO-LLM LeaderBoard, particularly in Korean-specific benchmarks.
KO-Platypus2-13B: A Korean-Optimized LLaMA2 Model
KO-Platypus2-13B is an auto-regressive language model developed by Kyujin Han (kyujinpy) as part of an LLM research consortium. It is built upon the LLaMA2 transformer architecture, specifically fine-tuned from the hyunseoki/ko-en-llama2-13b base model.
Key Capabilities and Training
- Korean Language Specialization: The model is specifically trained on the KOpen-platypus dataset, which is a high-quality Korean translation of the Open-Platypus dataset, enhancing its performance in Korean language tasks.
- Benchmarked Performance: KO-Platypus2-13B shows strong results on the Open KO-LLM LeaderBoard, achieving an average score of 47.90. It outperforms its base model (hyunseoki/ko-en-llama2-13b) on the Ko-ARC, Ko-MMLU, and Ko-TruthfulQA benchmarks.
- Model Architecture: Utilizes the robust LLaMA2 transformer architecture, providing a solid foundation for language generation.
Use Cases
This model is particularly well-suited for applications requiring advanced Korean language processing, such as:
- Korean text generation and understanding
- Research and development in Korean NLP
- Applications benefiting from a strong Korean-specific LLM baseline
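As a sketch of how the model might be used for the tasks above, the snippet below loads it with the standard Hugging Face Transformers `AutoModelForCausalLM` API. The Alpaca-style `### Instruction:` / `### Response:` prompt template is an assumption based on how Platypus-family fine-tunes are typically prompted, not something stated on this card; the heavy download-and-generate step is gated behind a flag so the prompt-building logic can be tried on its own.

```python
# Minimal usage sketch for kyujinpy/KO-Platypus2-13B.
# Assumptions: standard Transformers causal-LM API; Alpaca-style prompt template
# (commonly used with Platypus fine-tunes, not confirmed by this card).

RUN_GENERATION = False  # set True only with a GPU and the ~26 GB of fp16 weights


def build_prompt(instruction: str) -> str:
    """Wrap a Korean instruction in an Alpaca-style template (assumed format)."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


if RUN_GENERATION:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "kyujinpy/KO-Platypus2-13B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision to fit a single large GPU
        device_map="auto",
    )

    inputs = tokenizer(
        build_prompt("한국의 수도는 어디인가요?"),  # "What is the capital of Korea?"
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
else:
    # Without the model, just show the prompt that would be sent.
    print(build_prompt("한국의 수도는 어디인가요?"))
```

Greedy decoding (`do_sample=False`) is used here for reproducible output; sampling parameters can be swapped in for more varied Korean generation.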