GAI-LLM/ko-en-llama2-13b-mixed-v2

Text Generation · 13B parameters · FP8 quantization · 4K context length · License: cc-by-nc-2.0 · Transformer architecture, open weights · Concurrency cost: 1


Overview

GAI-LLM/ko-en-llama2-13b-mixed-v2 is a 13-billion-parameter autoregressive language model built on the Llama 2 transformer architecture. Developed by Donghoon Oh, Hanmin Myung, and Eunyoung Kim of SK C&C G.AI Eng, it is designed specifically for mixed Korean and English language processing.
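
The model can be loaded with the standard Hugging Face transformers APIs. Below is a minimal loading sketch; the dtype and device settings are illustrative assumptions, not requirements stated by this card.

```python
# Minimal loading sketch using Hugging Face transformers.
# torch_dtype and device_map are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "GAI-LLM/ko-en-llama2-13b-mixed-v2"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # spread layers across available devices
)
```

Half precision halves memory relative to FP32, but a 13B model still needs roughly 26 GB of GPU memory at float16, so multi-GPU placement or further quantization may be necessary on smaller cards.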

Key Capabilities

  • Bilingual Proficiency: Optimized for understanding and generating text in both Korean and English.
  • Llama 2 Base: Builds on the robust Llama 2 architecture for strong foundational language capabilities.
  • Specialized Training: Fine-tuned on a combination of Open Korean Datasets, including Kopen-platypus, Everythinglm v2, koalpaca_v1.1, and koCoT2000, using A100 GPUs.

Use Cases

This model is particularly well-suited for applications requiring:

  • Korean-English Translation: Tasks involving translation or cross-lingual understanding between Korean and English.
  • Bilingual Content Generation: Creating text in either Korean or English, or mixed-language content.
  • Research and Development: As a base for further fine-tuning on specific Korean-English NLP tasks.
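
As a concrete sketch of the translation use case above, the snippet below sends a plain instruction-style prompt through the text-generation pipeline. The prompt wording and sampling values are assumptions; this card does not document an instruction template for the model.

```python
# Translation-style prompt sketch; the prompt format and sampling values
# are assumptions (no instruction template is documented for this model).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="GAI-LLM/ko-en-llama2-13b-mixed-v2",
    device_map="auto",
)

# Korean: "Translate the following sentence into English:
# The weather is really nice today." / "English:"
prompt = "다음 문장을 영어로 번역하세요: 오늘 날씨가 정말 좋네요.\n영어:"
result = generator(prompt, max_new_tokens=64, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```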

Performance can be tracked on the Open Ko-LLM Leaderboard.

Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model cover the following sampler settings: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
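
For illustration, these settings could be supplied through an OpenAI-compatible client as sketched below. The base URL is an assumed Featherless endpoint and every value is a placeholder, not one of the actual top configurations; top_k, repetition_penalty, and min_p are not standard OpenAI parameters, so they are passed via extra_body on the assumption that the server accepts them.

```python
# Hedged sketch: sending sampler settings through an OpenAI-compatible
# client. The base_url is an assumed Featherless endpoint, and every
# value below is a placeholder, not one of the actual top configs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.completions.create(
    model="GAI-LLM/ko-en-llama2-13b-mixed-v2",
    prompt="안녕하세요! Please continue in English:",
    max_tokens=128,
    temperature=0.7,          # placeholder sampling values throughout
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={              # non-standard sampler params, passed
        "top_k": 40,          # through only if the server accepts them
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].text)
```

When running the model locally instead, temperature, top_p, top_k, and repetition_penalty map directly to keyword arguments of transformers' model.generate(), while frequency and presence penalties are API-side concepts with no direct generate() equivalent.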