Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32K · Published: Aug 1, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B is an 8-billion-parameter Korean language model, fine-tuned by Linkbricks Horizon-AI's Yunsung Ji (Saxo) from the Meta-Llama-3.1-8B-Instruct base model using SFT and DPO. It was trained on Korean-Chinese-English-Japanese cross-lingual data and logical-reasoning data to enhance multilingual understanding and complex Korean logical problem-solving. The model is particularly strengthened for high-level analysis of customer reviews and social postings and for coding tasks, and supports a 32K-token context window.
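Because the model is fine-tuned from Meta-Llama-3.1-8B-Instruct, prompts follow the Llama 3.1 chat format. Below is a minimal sketch of assembling such a prompt by hand; it assumes the model keeps the base model's special tokens, and in practice the tokenizer's `apply_chat_template` method handles this for you:

```python
# Sketch: manually assembling a Llama 3.1-style chat prompt.
# Assumption: the fine-tune keeps the base Llama 3.1 Instruct template
# (special tokens <|begin_of_text|>, <|start_header_id|>, <|eot_id|>).

def build_llama31_prompt(messages):
    """Render a list of {role, content} dicts into a Llama 3.1 prompt string."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Open an assistant turn so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama31_prompt([
    {"role": "system", "content": "You are a helpful Korean-language assistant."},
    {"role": "user", "content": "Summarize this customer review in one sentence."},
])
```

With the `transformers` library, the equivalent is `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`, which reads the template bundled with the model repository.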


Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model adjust the following sampler parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
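These sampler parameters map onto a standard OpenAI-compatible chat-completion request. The sketch below builds such a request payload; the numeric values are illustrative placeholders (not the actual popular configs), and the endpoint URL in the comment assumes Featherless's OpenAI-compatible API:

```python
# Sketch of a chat-completion payload carrying the sampler parameters above.
# The numeric values are placeholders, not the actual popular configs.
payload = {
    "model": "Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B",
    "messages": [{"role": "user", "content": "안녕하세요"}],
    "temperature": 0.7,          # placeholder values; tune per task
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}
# Assumed endpoint (OpenAI-compatible), e.g.:
# requests.post("https://api.featherless.ai/v1/chat/completions",
#               headers={"Authorization": f"Bearer {API_KEY}"}, json=payload)
```

Note that `top_k`, `repetition_penalty`, and `min_p` are extensions beyond the core OpenAI schema; they are accepted by many open-model serving stacks but may be ignored by strictly OpenAI-only clients.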