megastudyedu/ME-dpo-7B-v1.0
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 8k · License: cc-by-nc-nd-4.0 · Architecture: Transformer · Open weights

ME-dpo-7B-v1.0 is a 7-billion-parameter causal language model developed by megastudyedu, 프리딕션, and 마이스. It is a DPO-tuned version of the megastudyedu/ME-7B-v1.0 base model, fine-tuned on a translated version of the jondurbin/bagel-v0.3 dataset. The direct preference optimization (DPO) step aligns the model's outputs with human preferences, particularly in Korean-language contexts.
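Below is a minimal sketch of loading the model with Hugging Face transformers; the fp16 dtype, device placement, and prompt are illustrative assumptions rather than settings published for this model.

```python
# Minimal sketch: load megastudyedu/ME-dpo-7B-v1.0 and generate a reply.
# dtype/device settings are assumptions, not published defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "megastudyedu/ME-dpo-7B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 fits on a single GPU
    device_map="auto",
)

prompt = "대한민국의 수도는 어디인가요?"  # "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```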


Popular Sampler Settings

The three sampler-parameter combinations most used by Featherless users for this model draw on the following parameters (a request sketch follows the list):

temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p
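
The exact values of the top configurations are not reproduced here, but the sketch below shows how such sampler settings could be passed through an OpenAI-compatible endpoint. The base URL, API key placeholder, and parameter values are assumptions; top_k, repetition_penalty, and min_p are provider-specific extensions that not every endpoint accepts.

```python
# Hedged sketch: send sampler settings via an OpenAI-compatible API.
# Base URL, key, and all values below are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumption: OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="megastudyedu/ME-dpo-7B-v1.0",
    messages=[{"role": "user", "content": "간단히 자기소개를 해 주세요."}],
    temperature=0.7,           # illustrative value
    top_p=0.9,                 # illustrative value
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={               # provider-specific extensions, if supported
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```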