kyujinpy/KoR-Orca-Platypus-13B
kyujinpy/KoR-Orca-Platypus-13B is a 13-billion-parameter auto-regressive language model developed by Kyujin Han and based on the LLaMA2 transformer architecture. It is fine-tuned for Korean language tasks on a combination of the OpenOrca-KO and KOpen-platypus datasets, and it performs strongly on Korean language benchmarks, making it suitable for applications that require robust Korean natural language understanding and generation.
KoR-Orca-Platypus-13B Overview
KoR-Orca-Platypus-13B is a 13-billion-parameter auto-regressive language model built on the LLaMA2 transformer architecture. Developed by Kyujin Han as part of an LLM research consortium, the model is designed specifically for Korean language processing.
Key Capabilities & Training
- Architecture: Based on the LLaMA2 transformer, providing a robust foundation for language generation (a loading sketch follows this list).
- Language Focus: Primarily developed for Korean, utilizing a combined dataset of OpenOrca-KO and kyujinpy/KOpen-platypus.
- Performance: Achieves an average score of 50.13 on the Open Ko-LLM Leaderboard, outperforming comparable models such as GenAI-llama2-ko-en-platypus and KoT-Platypus2-13B on overall average.
- Benchmarks: Demonstrates competitive scores across various Korean benchmarks, including Ko-ARC (42.06), Ko-HellaSwag (53.95), Ko-MMLU (42.28), Ko-TruthfulQA (43.55), and Ko-CommonGen V2 (68.78).
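Below is a minimal sketch of loading the model with the Hugging Face transformers library. The repository id comes from this card; the fp16 dtype and device placement are illustrative assumptions rather than settings prescribed by the model card, and a 13B model in fp16 needs roughly 26 GB of accelerator memory.

```python
# Minimal loading sketch; dtype/device settings are assumptions, not
# values prescribed by the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kyujinpy/KoR-Orca-Platypus-13B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 halves memory vs. fp32; ~26 GB for 13B weights
    device_map="auto",          # requires the `accelerate` package; places layers automatically
)
```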
Use Cases
This model is well-suited for applications requiring high-quality Korean text generation and understanding. Its strong benchmark performance suggests utility in tasks such as the following (a generation sketch appears after the list):
- Korean content creation.
- Korean language question answering systems.
- Korean text summarization and analysis.
- Research and development in Korean natural language processing.
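Continuing the loading sketch above, the following shows one way to prompt the model for Korean question answering. The plain-text prompt and the sampling parameters are assumptions for illustration; check the model card for the exact instruction template used during fine-tuning.

```python
# Hypothetical Korean QA prompt; the instruction format is an assumption.
prompt = "한국의 수도에 대해 설명해 주세요."  # "Please describe the capital of Korea."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,  # cap on generated tokens
    do_sample=True,      # sample rather than greedy-decode
    temperature=0.7,     # illustrative sampling settings, not tuned values
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```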