AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.0
Text generation · Concurrency cost: 1 · Model size: 13B · Quantization: FP8 · Context length: 4k · License: apache-2.0 · Architecture: Transformer · Open weights · Warm

AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.0 is a 13 billion parameter language model developed by AIdenU, fine-tuned using Direct Preference Optimization (DPO). This model is based on the LLAMA-2 architecture and is specifically optimized for Korean language processing. It is designed for generating high-quality, contextually relevant responses in Korean.


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each configuration adjusts the following sampling parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
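As a rough illustration of what the first three parameters above control, here is a minimal Python sketch of temperature scaling, top_k truncation, and top_p (nucleus) filtering over a logit vector. The function name and simplified logic are illustrative only, not Featherless's actual sampler; the penalty parameters and min_p are omitted for brevity.

```python
import math

def sample_distribution(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Turn raw logits into a sampling distribution.

    Returns a list of probabilities aligned with the input logits;
    tokens removed by top_k or top_p filtering get probability 0.
    (Illustrative sketch, not a production sampler.)
    """
    # Temperature scaling: values below 1.0 sharpen the distribution,
    # values above 1.0 flatten it.
    scaled = [l / temperature for l in logits]

    # Rank token indices from most to least likely.
    order = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)

    # top_k: keep only the k highest-scoring tokens (0 disables the filter).
    if top_k > 0:
        order = order[:top_k]

    # Softmax over the surviving tokens (max-subtraction for stability).
    m = max(scaled[i] for i in order)
    exps = {i: math.exp(scaled[i] - m) for i in order}
    total = sum(exps.values())
    probs = {i: e / total for i, e in exps.items()}

    # top_p (nucleus): keep the smallest high-probability prefix whose
    # cumulative mass reaches top_p, then renormalize.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    norm = sum(probs[i] for i in kept)
    out = [0.0] * len(logits)
    for i in kept:
        out[i] = probs[i] / norm
    return out
```

For example, with `top_k=1` the distribution collapses onto the single most likely token, while a low `top_p` discards the unlikely tail before sampling.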