Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B
Text Generation · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Ctx Length: 32k · Published: Aug 7, 2024 · License: apache-2.0 · Architecture: Transformer

Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B is a 27-billion-parameter Korean language model fine-tuned by Linkbricks Horizon-AI from the Gemma-2-27B-IT base model. Trained with SFT and DPO, it targets complex Korean logical problem-solving and cross-lingual understanding across Korean, Chinese, English, and Japanese. The model is particularly tuned for high-level analysis of customer reviews and social postings, as well as coding tasks, and supports a 32,768-token context length.
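As a Gemma-2 derivative, the model expects the Gemma-2 turn-based chat format with `<start_of_turn>`/`<end_of_turn>` markers. In practice the tokenizer's `apply_chat_template` should be treated as authoritative; the sketch below only approximates the standard Gemma-2 layout, and the helper name `build_gemma2_prompt` is ours, not part of any library:

```python
def build_gemma2_prompt(messages):
    """Approximate the Gemma-2 turn-based chat layout.

    A minimal sketch; prefer tokenizer.apply_chat_template in real use,
    which also handles the BOS token and any template revisions.
    """
    parts = []
    for msg in messages:
        # Gemma-2 uses the role name "model" for assistant turns.
        role = "model" if msg["role"] == "assistant" else "user"
        parts.append(f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n")
    # Leave an open model turn for the completion to fill in.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = build_gemma2_prompt([
    {"role": "user", "content": "한국어로 자기소개를 해주세요."}  # "Introduce yourself in Korean."
])
print(prompt)
```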


Popular Sampler Settings

The three most popular sampler parameter combinations used by Featherless users for this model adjust the following settings:

- `temperature` — scales the logits; higher values produce more random output
- `top_p` — nucleus sampling cutoff on cumulative probability
- `top_k` — restricts sampling to the k most likely tokens
- `frequency_penalty` — penalizes tokens in proportion to how often they have already appeared
- `presence_penalty` — penalizes any token that has appeared at least once
- `repetition_penalty` — multiplicative penalty applied to repeated tokens
- `min_p` — discards tokens whose probability falls below a fraction of the top token's probability
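These settings map onto the request body of an OpenAI-style chat completion API. A minimal sketch of building such a payload follows; the values are illustrative defaults, not the actual community presets, and note that `top_k`, `repetition_penalty`, and `min_p` are common inference-server extensions rather than standard OpenAI parameters, so which of them are honored depends on the serving endpoint:

```python
import json

# Illustrative sampler values, not the Featherless presets shown above.
payload = {
    "model": "Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B",
    "messages": [{"role": "user", "content": "고객 리뷰를 요약해 주세요."}],  # "Summarize the customer reviews."
    "max_tokens": 512,
    "temperature": 0.7,         # randomness of sampling
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # restrict to the 40 most likely tokens
    "frequency_penalty": 0.0,   # penalize frequent repeats
    "presence_penalty": 0.0,    # penalize any repeats
    "repetition_penalty": 1.1,  # multiplicative repeat penalty
    "min_p": 0.05,              # drop tokens below 5% of the top probability
}
body = json.dumps(payload, ensure_ascii=False)
print(body)
```

This `body` string would then be POSTed to the provider's chat completions endpoint with an API key; the exact URL and auth header follow the provider's documentation.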