eren23/dpo-binarized-NeutrixOmnibe-7B
Type: Text Generation
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Ctx Length: 8k
Published: Feb 12, 2024
License: apache-2.0
Architecture: Transformer
Weights: Open
eren23/dpo-binarized-NeutrixOmnibe-7B is a 7-billion-parameter language model, DPO fine-tuned from Kukedlc/NeuTrixOmniBe-7B-model-remix on the argilla/OpenHermes2.5-dpo-binarized-alpha dataset. It scores an average of 76.31 on the Open LLM Leaderboard, reflecting strong general language understanding and reasoning. The DPO fine-tuning improves alignment, making the model well suited to conversational and instruction-following tasks.
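For local experimentation, here is a minimal sketch using Hugging Face transformers; the prompt, dtype, and device settings are illustrative assumptions, not settings taken from the model card:

```python
# Minimal sketch: load the model from the Hugging Face Hub and generate.
# Assumes a GPU with enough memory for 7B weights in float16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eren23/dpo-binarized-NeutrixOmnibe-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",
)

prompt = "Explain instruction tuning in one paragraph."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```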
Popular Sampler Settings
The three most popular parameter combinations used by Featherless users for this model. Each configuration sets the following sampler parameters (a request sketch follows the list):

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
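As a sketch of how these parameters might be passed in a request, the example below uses the OpenAI Python client pointed at Featherless's OpenAI-compatible endpoint. The base URL, the use of extra_body for non-standard sampler fields, and every parameter value shown are assumptions for illustration, not the actual top-3 configurations:

```python
# Minimal sketch: send a chat completion request with explicit sampler
# settings. All values below are illustrative, not recommended defaults.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint URL
    api_key="YOUR_FEATHERLESS_API_KEY",        # placeholder credential
)

response = client.chat.completions.create(
    model="eren23/dpo-binarized-NeutrixOmnibe-7B",
    messages=[{"role": "user", "content": "Summarize DPO in two sentences."}],
    temperature=0.7,        # illustrative value
    top_p=0.9,              # illustrative value
    frequency_penalty=0.0,  # illustrative value
    presence_penalty=0.0,   # illustrative value
    extra_body={            # assumed pass-through for non-standard fields
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```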