ArianAskari/SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta
Text Generation · Open Weights · Cold
Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8k
Published: Feb 13, 2024 · License: apache-2.0 · Architecture: Transformer

ArianAskari/SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta is a 7-billion-parameter language model developed by ArianAskari, built on Zephyr-7b-beta. It is fine-tuned with a combination of Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO); as the model name indicates, SOLID-generated responses serve as the rejected samples and SFT responses as the chosen samples, with the aim of improving response quality. The model is intended for general language generation tasks and leverages its 8192-token context length for coherent, extended outputs.
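Since the weights are openly published, a standard causal-LM loading path should work. Below is a minimal sketch using Hugging Face transformers; it assumes the checkpoint is hosted on the Hub under the same identifier and that the tokenizer ships a Zephyr-style chat template, and the prompt is purely illustrative.

```python
# Minimal sketch: loading the checkpoint with Hugging Face transformers.
# Assumes the repository id matches the model name and that a chat template
# is bundled with the tokenizer (typical for Zephyr-based fine-tunes).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ArianAskari/SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place layers on available GPU(s)/CPU automatically
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

# Build a single-turn chat prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize what DPO fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```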


Popular Sampler Settings

The three parameter combinations most frequently used by Featherless users for this model cover the following sampler settings (a sketch of passing them through an API call follows the list):

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
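As one illustration, the sketch below passes these sampler settings through an OpenAI-compatible chat completions request. The base URL, the use of extra_body for provider-specific parameters (top_k, repetition_penalty, min_p), and the numeric values are assumptions for illustration, not one of the actual top configurations.

```python
# Hypothetical sketch: sending the sampler settings above through an
# OpenAI-compatible chat completions endpoint. The base_url, the extra_body
# pass-through, and all numeric values are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="ArianAskari/SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta",
    messages=[{"role": "user", "content": "Write a two-sentence product blurb."}],
    temperature=0.7,          # standard OpenAI-style sampler parameters
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={              # non-standard parameters, if the provider accepts them
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```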