Edentns/DataVortexS-10.7B-dpo-v1.2
TEXT GENERATION

Concurrency Cost: 1
Model Size: 10.7B
Quant: FP8
Ctx Length: 4k
License: cc-by-nc-sa-4.0
Architecture: Transformer
Weights: Open
Status: Warm

Edentns/DataVortexS-10.7B-dpo-v1.2 is a 10.7 billion parameter language model developed by Kwangseok Yang, Jeongwon Choi, Seunghyun Choi, and Hyoseok Choi. Built upon the megastudy/M-SOLAR-10.7B-v1.3 base model, it is fine-tuned using DPO and optimized for conversational AI tasks, particularly excelling in Korean language understanding and generation. The model features a 4096-token context length and demonstrates strong performance on Korean language benchmarks like Ko LM Eval Harness and Ko-LLM-Leaderboard.


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model. Each config sets the following sampler parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
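The sampler fields above map onto the request body of an OpenAI-style chat-completions call. A minimal sketch of building such a payload, where the endpoint URL and all numeric values are illustrative assumptions and not the actual user configs from this page:

```python
# Sketch of a chat-completions request carrying the sampler settings listed above.
# The endpoint URL and every numeric default are illustrative placeholders.

API_URL = "https://api.featherless.ai/v1/chat/completions"  # assumed endpoint
MODEL_ID = "Edentns/DataVortexS-10.7B-dpo-v1.2"


def build_payload(prompt: str, **sampler) -> dict:
    """Combine a user prompt with the sampler parameters tracked above."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        # Illustrative defaults; override any of them via keyword arguments.
        "temperature": 0.7,
        "top_p": 0.9,
        "top_k": 40,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    }
    payload.update(sampler)
    return payload


# Example: a Korean prompt with a lower temperature than the default.
payload = build_payload("안녕하세요, 자기소개를 해 주세요.", temperature=0.5)
```

The payload would then be POSTed with an `Authorization` header carrying an API key. Note that `repetition_penalty` and `min_p` are extensions beyond the base OpenAI schema, so whether they are honored depends on the serving stack.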