princeton-nlp/Mistral-7B-Base-SFT-DPO
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8k · Published: May 17, 2024 · Architecture: Transformer · Status: Cold

princeton-nlp/Mistral-7B-Base-SFT-DPO is a 7-billion-parameter language model from the Princeton NLP group, built on the Mistral architecture with an 8192-token context length. As the name indicates, a Mistral-7B base model was first supervised fine-tuned (SFT) and then aligned with Direct Preference Optimization (DPO). The checkpoint comes out of the research behind SimPO (Simple Preference Optimization with a Reference-Free Reward), where it serves as a DPO baseline for comparing preference-optimization methods. It is suited to tasks that benefit from alignment to human preferences, such as instruction following and open-ended chat.
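Since the model's alignment step is DPO, a minimal sketch of the DPO objective may help to place it: this is the standard loss from Rafailov et al. (2023), not code from the princeton-nlp release, and the function name and default beta are illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss: -log sigmoid(beta * reward margin).

    Each argument holds the summed log-probability of a whole response
    (chosen or rejected) under the trainable policy or the frozen
    reference model, one entry per preference pair in the batch.
    """
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```

SimPO, by contrast, drops the reference model and uses a length-normalized reward, which is what the paper behind this release compares this DPO baseline against.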

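For running the model locally, a minimal inference sketch with Hugging Face transformers follows; only the model id comes from this page, while the prompt and sampling values are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/Mistral-7B-Base-SFT-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

prompt = "Explain preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```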

Popular Sampler Settings

The three most popular sampler configurations among Featherless users for this model cover the parameters below (a hedged request example follows the list).

- temperature: scales the output distribution; higher values yield more random samples
- top_p: nucleus sampling; keeps the smallest set of tokens whose cumulative probability exceeds p
- top_k: samples only from the k most probable tokens
- frequency_penalty: penalizes tokens in proportion to how often they have already appeared
- presence_penalty: penalizes any token that has already appeared at least once
- repetition_penalty: applies a multiplicative penalty to the logits of previously generated tokens
- min_p: discards tokens whose probability is below min_p times that of the most likely token
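
These parameters map directly onto the request body of an OpenAI-compatible completions call. The sketch below is illustrative only: the parameter values are placeholders rather than the actual top configurations from this page, and the endpoint URL and key handling are assumptions to verify against the Featherless documentation.

```python
import requests

# Placeholder values; the real top-3 configurations are shown in the
# interactive tabs on this page. URL and auth scheme are assumptions.
payload = {
    "model": "princeton-nlp/Mistral-7B-Base-SFT-DPO",
    "prompt": "Once upon a time",
    "max_tokens": 256,
    "temperature": 0.8,        # sampling randomness
    "top_p": 0.95,             # nucleus sampling cutoff
    "top_k": 40,               # restrict to the 40 most likely tokens
    "frequency_penalty": 0.0,  # scale penalty by prior occurrence count
    "presence_penalty": 0.0,   # flat penalty once a token has appeared
    "repetition_penalty": 1.1, # multiplicative penalty on repeats
    "min_p": 0.05,             # drop tokens below 5% of the top probability
}
resp = requests.post(
    "https://api.featherless.ai/v1/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
    timeout=60,
)
print(resp.json()["choices"][0]["text"])
```

Note that not every OpenAI-compatible server accepts all of these fields; repetition_penalty and min_p in particular are extensions beyond the original OpenAI schema.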