GAI-LLM/ko-en-llama2-13b-mixed-v4
GAI-LLM/ko-en-llama2-13b-mixed-v4 is a 13-billion-parameter auto-regressive language model, based on the LLaMA2 architecture, fine-tuned for text generation in mixed Korean and English contexts using a combination of open Korean datasets.
Overview
GAI-LLM/ko-en-llama2-13b-mixed-v4 is a 13 billion parameter auto-regressive language model built upon the LLaMA2 transformer architecture. Developed by Donghoon Oh, Hanmin Myung, and Eunyoung Kim from SK C&C G.AI Eng, this model is a fine-tuned version of the hyunseoki/ko-en-llama2-13b base model.
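The card does not ship usage code, so the following is a minimal loading sketch assuming the standard Hugging Face transformers API; the repository id comes from the title above, while the dtype and device settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "GAI-LLM/ko-en-llama2-13b-mixed-v4"

# Tokenizer and weights are pulled directly from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# fp16 halves memory versus fp32, but a 13B model still needs roughly
# 26 GB of GPU memory; device_map="auto" (requires the accelerate
# package) spreads layers across whatever devices are available.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
)
```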
Key Capabilities
- Multilingual Text Generation: Optimized for generating text in both Korean and English (see the generation sketch after this list).
- Mixed-Strategy Training: Trained on a combination of open Korean datasets, including Kopen-platypus, kaist_cot_deepL, and open_orca-ko, mixed using NIv2-, FLAN-, and T0-style instruction strategies.
- LLaMA2 Architecture: Benefits from the robust and widely recognized LLaMA2 transformer architecture.
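Building on the loading sketch above, here is a hedged generation example for the bilingual use case; the prompt text and sampling parameters are illustrative choices, not values taken from the model card.

```python
# A Korean prompt asking for an English translation, exercising the
# model's mixed Korean/English capability.
prompt = "다음 문장을 영어로 번역해 주세요: 오늘 날씨가 정말 좋네요."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,   # sampling settings below are illustrative defaults
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```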
Benchmarking
The model's performance can be tracked and compared on the Open Ko-LLM Leaderboard, which ranks it against other Korean-capable language models.
Good For
- Applications requiring text generation in mixed Korean and English environments.
- Research and development focusing on multilingual LLMs, particularly for Korean and English language pairs.
- Developers looking for a LLaMA2-based model with specialized Korean language capabilities.