sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-recovered
Task: text generation
Model size: 7B
Quantization: FP8
Context length: 4k
Published: Feb 5, 2024
License: apache-2.0
Architecture: Transformer
Concurrency cost: 1

The sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-recovered model is a 7-billion-parameter causal language model based on the Mistral architecture, fine-tuned with Direct Preference Optimization (DPO) using LoRA adapters. It is optimized for conversational tasks and intended for applications that require nuanced responses and improved preference alignment.
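Since this model derives from OpenHermes-2.5-Mistral-7B, which is trained on the ChatML conversation format, prompts are typically wrapped in `<|im_start|>`/`<|im_end|>` markers. The sketch below shows a minimal prompt-formatting helper under that assumption; the function name `build_chatml_prompt` is illustrative, not part of any library.

```python
def build_chatml_prompt(messages):
    """Format a list of {"role", "content"} dicts as a ChatML prompt,
    ending with an open assistant turn for the model to complete."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave the assistant turn open so generation continues from here.
    prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO in one sentence."},
]
print(build_chatml_prompt(messages))
```

The resulting string can be tokenized and passed to the model with any standard text-generation runtime; stopping on the `<|im_end|>` token yields a single assistant reply.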
