kyujinpy/KOR-Orca-Platypus-13B-v3
KOR-Orca-Platypus-13B-v3 is a 13-billion-parameter auto-regressive language model developed by Kyujin Han, based on the LLaMA2 transformer architecture. Fine-tuned on the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset with NEFTune, the model is optimized for Korean-language tasks. It posts competitive scores on the Open Ko-LLM Leaderboard, making it suitable for a range of Korean natural language processing applications.
Overview
KOR-Orca-Platypus-13B-v3 is a 13-billion-parameter auto-regressive language model developed by Kyujin Han (kyujinpy) as part of an LLM research consortium. It is built on the LLaMA2 transformer architecture, using hyunseoki/ko-en-llama2-13b as its base model. The model was fine-tuned on the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset, incorporating NEFTune (noisy embedding fine-tuning) during training on an A100 40GB GPU.
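For reference, below is a minimal loading and generation sketch using Hugging Face Transformers. The repository ID comes from the model card above; the instruction-style prompt template is an assumption (Platypus-derived models commonly use an Alpaca-style `### Instruction:` / `### Response:` format), so adjust it if the model's actual template differs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kyujinpy/KOR-Orca-Platypus-13B-v3"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # ~26 GB of weights in fp16 for a 13B model
    device_map="auto",          # requires the accelerate package
)

# Alpaca-style prompt template -- an assumption, not confirmed by the card.
prompt = "### Instruction:\n한국의 수도는 어디인가요?\n\n### Response:\n"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```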
Key Capabilities
- Korean Language Processing: Optimized for generating and understanding text in Korean.
- LLaMA2 Architecture: Leverages the robust and widely-used LLaMA2 transformer design.
- Benchmark Performance: Achieves an average score of 48.37 on the Open Ko-LLM Leaderboard, with the following subtask scores:
  - Ko-ARC: 43.77
  - Ko-HellaSwag: 54.27
  - Ko-MMLU: 42.66
  - Ko-TruthfulQA: 38.58
  - Ko-CommonGen V2: 62.57
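As a quick sanity check, the reported average appears to be the unweighted mean of the five subtask scores:

```python
# Open Ko-LLM Leaderboard subtask scores reported above.
scores = {
    "Ko-ARC": 43.77,
    "Ko-HellaSwag": 54.27,
    "Ko-MMLU": 42.66,
    "Ko-TruthfulQA": 38.58,
    "Ko-CommonGen V2": 62.57,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 48.37, matching the reported leaderboard average
```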
Good For
- Korean NLP Applications: Ideal for tasks requiring strong performance in the Korean language.
- Research and Development: Suitable for researchers and developers exploring LLaMA2-based models fine-tuned for specific linguistic contexts (a NEFTune fine-tuning sketch follows this list).
- Benchmarking: Can be used as a reference model for comparing performance against other Korean LLMs on the Open Ko-LLM Leaderboard.
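For those exploring similar fine-tunes, the sketch below shows how NEFTune can be enabled through the `neftune_noise_alpha` option supported by `trl`'s SFTTrainer. This is a minimal illustration, not the author's actual training script: the noise alpha, the hyperparameters, and the dataset column names are all assumptions to adapt.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset and base model names come from the Overview above.
dataset = load_dataset("kyujinpy/KOR-OpenOrca-Platypus-v3", split="train")

def formatting_func(example):
    # Column names are assumptions; adjust to the dataset's actual schema.
    return f"### Instruction:\n{example['instruction']}\n\n### Response:\n{example['output']}"

config = SFTConfig(
    output_dir="kor-orca-platypus-neftune",
    neftune_noise_alpha=5,          # enables NEFTune; alpha value is illustrative
    per_device_train_batch_size=1,  # a 13B model is memory-hungry on a single 40GB GPU
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,
)

trainer = SFTTrainer(
    model="hyunseoki/ko-en-llama2-13b",  # the base model named in the Overview
    args=config,
    train_dataset=dataset,
    formatting_func=formatting_func,
)
trainer.train()
```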