eren23/OGNO-7b-dpo-truthful
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 8k · Published: Feb 16, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights
eren23/OGNO-7b-dpo-truthful is a 7-billion-parameter language model, DPO fine-tuned from paulml/OGNO-7B, itself a Mistral 7B variant. The model is specifically optimized for truthfulness, scoring 76.61% on TruthfulQA (0-shot). It also shows strong general reasoning, with an average score of 76.14 across various benchmarks, making it suitable for applications that require factual accuracy and robust understanding.
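As a minimal sketch of how this model might be queried, assuming an OpenAI-compatible chat completions endpoint (the endpoint URL, header names, and parameter choices below are illustrative assumptions, not taken from this page):

```python
import json

# Hypothetical request payload for an OpenAI-compatible chat completions
# endpoint serving this model. Only the model identifier comes from the
# page above; everything else is an illustrative assumption.
payload = {
    "model": "eren23/OGNO-7b-dpo-truthful",
    "messages": [
        {"role": "user", "content": "Is the Great Wall of China visible from space?"}
    ],
    "max_tokens": 256,     # stays well inside the model's 8k context window
    "temperature": 0.7,    # one of the sampler parameters listed below
}

# The serialized payload would be POSTed to the provider's /v1/chat/completions route.
print(json.dumps(payload, indent=2))
```

The truthfulness-oriented question is a natural fit for a model tuned on TruthfulQA-style factuality.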
Popular Sampler Settings
The three parameter combinations most used by Featherless users for this model cover the following sampler parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p