ArianAskari/SOLID-SFT-DPO-MixQV3-SOLIDRejected-SFTChosen-Zephyr-7b-beta
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8k · Published: Feb 13, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
ArianAskari/SOLID-SFT-DPO-MixQV3-SOLIDRejected-SFTChosen-Zephyr-7b-beta is a 7-billion-parameter language model published by ArianAskari. As the name suggests, it appears to be a fine-tuned variant of Zephyr-7b-beta that combines Supervised Fine-Tuning (SFT) with Direct Preference Optimization (DPO). With its 8192-token context window, it targets general language understanding and generation tasks, and may be particularly suited to tasks that benefit from preference alignment.
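The card includes no usage snippet, so here is a minimal inference sketch. It assumes the checkpoint exposes the standard Hugging Face transformers text-generation interface and inherits Zephyr-7b-beta's chat template; neither is confirmed by the card, and the prompt and sampling values are purely illustrative.

```python
# Minimal inference sketch (assumptions: standard transformers interface,
# Zephyr-style chat template inherited from the base model).
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="ArianAskari/SOLID-SFT-DPO-MixQV3-SOLIDRejected-SFTChosen-Zephyr-7b-beta",
    torch_dtype=torch.bfloat16,  # adjust dtype/device to your hardware
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain preference alignment in two sentences."},
]
out = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)

# With chat-style input, the pipeline returns the full conversation;
# the last message is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```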
Popular Sampler Settings
The three parameter combinations most commonly used by Featherless users for this model adjust the following sampler settings (a request sketch follows the list):
temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
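The sketch below shows how such sampler settings might be passed through an OpenAI-compatible client. The base_url, the FEATHERLESS_API_KEY environment variable, and extra_body support for the non-standard parameters are assumptions; the values shown are illustrative, not the actual top configs from the widget above.

```python
# Sketch: sending the sampler parameters above via an OpenAI-compatible API.
# Endpoint URL and API-key variable are assumed; check the provider's docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],
)

resp = client.chat.completions.create(
    model="ArianAskari/SOLID-SFT-DPO-MixQV3-SOLIDRejected-SFTChosen-Zephyr-7b-beta",
    messages=[{"role": "user", "content": "Write a haiku about alignment."}],
    temperature=0.7,        # illustrative values only
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # top_k, min_p, and repetition_penalty are not part of the core OpenAI
    # schema; providers that support them typically accept them via extra_body.
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.1},
)
print(resp.choices[0].message.content)
```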