AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.1

Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4K · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.1 is a 13 billion parameter language model developed by AIdenU, fine-tuned for Korean language processing. This model is based on the LLAMA-2 architecture and utilizes DPO (Direct Preference Optimization) for enhanced performance. It is specifically designed for generating Korean text and understanding Korean prompts, making it suitable for applications requiring high-quality Korean language generation.


Model Overview

AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.1 is a 13 billion parameter language model developed by AIdenU. This model is a DPO (Direct Preference Optimization) fine-tuned version of the AIdenU/LLAMA-2-13b-ko-Y24_v2.0 base model, indicating an optimization process to align its outputs with human preferences.

Key Capabilities

  • Korean Language Generation: Optimized for generating coherent and contextually relevant text in Korean.
  • Instruction Following: Demonstrates the ability to follow instructions provided in Korean prompts, as shown in the example generation code.
  • LLAMA-2 Architecture: Built upon the robust LLAMA-2 framework, providing a strong foundation for language understanding and generation.
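The generation code referenced above is not reproduced on this page. A minimal sketch using the Hugging Face `transformers` API is shown below; the `[INST]`/`<<SYS>>` prompt format is an assumption carried over from the standard Llama-2 chat convention, and the system prompt and sampling values are illustrative, not confirmed by this card:

```python
MODEL_ID = "AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.1"


def build_prompt(instruction: str,
                 system: str = "You are a helpful Korean assistant.") -> str:
    """Wrap a user instruction in the Llama-2 chat template (assumed format)."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST]"


def generate(instruction: str) -> str:
    """Load the model and generate a completion (requires a GPU with ~13 GB+ VRAM)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
    )
    # Strip the prompt tokens, keeping only the newly generated continuation.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    # "What is the capital of Korea?" in Korean.
    print(generate("한국의 수도는 어디인가요?"))
```

If the fine-tune was trained on a different instruction template, `build_prompt` should be adjusted to match; the model repository is the authoritative source for the expected format.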

Good For

  • Korean NLP Applications: Ideal for tasks such as chatbots, content creation, and summarization specifically in the Korean language.
  • Research and Development: Suitable for researchers and developers working on Korean-centric large language models and applications.
  • Customizable Korean AI: Provides a strong base for further fine-tuning on specific Korean datasets or use cases.

Popular Sampler Settings

The top parameter combinations used by Featherless users for this model adjust the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
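These parameters control how the next token is chosen from the model's output distribution. As a rough illustration of what `temperature`, `top_k`, `top_p`, and `min_p` do, here is a plain-Python sketch over a toy probability vector; this is not Featherless's implementation (real samplers operate on logits inside the inference engine), and the penalty parameters, which adjust scores based on tokens already generated, are omitted:

```python
import math


def apply_temperature(logits, temperature):
    """Softmax with temperature: lower values sharpen, higher values flatten."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def top_k_filter(probs, k):
    """Keep only the k most likely tokens, then renormalize."""
    cutoff = sorted(probs, reverse=True)[k - 1]
    kept = [p if p >= cutoff else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]


def top_p_filter(probs, top_p):
    """Keep the smallest set of top tokens whose cumulative probability reaches top_p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = [0.0] * len(probs)
    cumulative = 0.0
    for i in order:
        kept[i] = probs[i]
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(kept)
    return [p / total for p in kept]


def min_p_filter(probs, min_p):
    """Keep tokens whose probability is at least min_p times the top probability."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]
```

For example, with token probabilities `[0.5, 0.3, 0.1, 0.1]`, both `top_k_filter(probs, 2)` and `top_p_filter(probs, 0.7)` discard the two tail tokens and renormalize the survivors to `0.625` and `0.375`.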