sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-original-v2
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4k · Published: Feb 15, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-original-v2 is a 7-billion-parameter language model based on the Mistral architecture. It is a fine-tuned variant, likely optimized for conversational AI and instruction following: its name indicates DPO (Direct Preference Optimization) training and suggests evaluation on MT-Bench, a multi-turn conversation benchmark. Its primary strength is generating human-like text responses for interactive applications.
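For interactive use, prompts are typically formatted in ChatML, the template used by the OpenHermes-2.5-Mistral-7B base model; a minimal sketch, assuming this fine-tune inherits that format:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in ChatML (the format of the
    OpenHermes-2.5 base model; assumed to carry over to this DPO variant)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example: the resulting string is what you would send to the model,
# which then generates tokens until it emits <|im_end|>.
prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
```

In practice, loading the tokenizer with Hugging Face `transformers` and calling `tokenizer.apply_chat_template` produces the same formatting without hand-writing the special tokens.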
