GAI-LLM/ko-en-llama2-13b-mixed-v5
GAI-LLM/ko-en-llama2-13b-mixed-v5 is a 13-billion-parameter auto-regressive language model developed by Donghoon Oh, Hanmin Myung, and Eunyoung Kim (SK C&C G.AI Eng). Based on the LLaMA2 transformer architecture, the model is fine-tuned for mixed Korean and English language tasks. It was trained on a combined Open Korean Dataset and takes text-only input and produces text-only output.
Overview
Built on the LLaMA2 transformer architecture, the model was fine-tuned from the hyunseoki/ko-en-llama2-13b base model.
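A minimal loading sketch using the Hugging Face transformers library is shown below. The repository id comes from this model card; the half-precision dtype and automatic device placement (which requires the accelerate package) are illustrative assumptions, not settings published by the authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GAI-LLM/ko-en-llama2-13b-mixed-v5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 so the 13B weights (~26 GB) fit in GPU memory
    device_map="auto",          # assumption: let accelerate spread layers across available devices
)
```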
Key Capabilities
- Bilingual Proficiency: Optimized for processing and generating text in both Korean and English.
- Text Generation: Capable of generating coherent and contextually relevant text from given prompts (see the generation sketch after this list).
- Mixed-Strategy Training: Trained on a combined Open Korean Dataset with a mixed-strategy approach on A100 GPUs.
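To illustrate bilingual text generation, the sketch below continues from the loading example in the Overview. The prompt and decoding parameters (sampling, temperature, top-p) are illustrative assumptions rather than settings recommended by the authors.

```python
# Mixed Korean/English prompt; any text prompt works the same way.
prompt = "한국의 수도에 대해 한 문장으로 설명해줘. Then translate your answer into English."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,   # cap on newly generated tokens
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```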
Benchmarking
The model's performance can be tracked and compared against other Korean large language models on the Open KO-LLM LeaderBoard.
Intended Use
This model is suitable for applications requiring strong performance in mixed Korean and English language understanding and generation, particularly in contexts where a LLaMA2-based architecture is preferred.