sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Feb 2, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

OpenHermes-2.5-Mistral-7B-mt-bench-DPO by sonthenguyen is a 7-billion-parameter causal language model, fine-tuned with Direct Preference Optimization (DPO) on the Mistral-7B architecture. The model is optimized for conversational AI and instruction following, with a 4096-token context length, and is particularly suited to tasks that require nuanced responses and close adherence to instructions.
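The OpenHermes 2.5 base model uses the ChatML prompt format, and this DPO fine-tune presumably inherits it. As a minimal sketch (the `build_chatml_prompt` helper is illustrative, not part of any official API), a conversation can be assembled into a single prompt string like this:

```python
def build_chatml_prompt(messages):
    """Assemble a ChatML-formatted prompt from a list of
    {"role": ..., "content": ...} dicts, ending with an open
    assistant header so the model generates the reply."""
    parts = []
    for msg in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize DPO in one sentence."},
])
print(prompt)
```

If you serve the model through a library that supports chat templates (e.g. Hugging Face `transformers` tokenizers with `apply_chat_template`), prefer that over hand-rolling the format, since the tokenizer carries the authoritative template for the checkpoint.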
