sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-reversed_corrupted
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k
Published: Feb 15, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-reversed_corrupted model is a 7-billion-parameter language model built on the Mistral architecture. As the "mt-bench-DPO-reversed_corrupted" suffix suggests, it is an experimental fine-tune of OpenHermes-2.5-Mistral-7B, most likely trained with Direct Preference Optimization (DPO) on a reversed or corrupted variant of MT-Bench preference data. Its distinguishing trait is this unusual fine-tuning, which may produce altered response patterns or serve as a probe of robustness to degraded preference data. Developers might consider it when they need a Mistral-7B base with deliberately modified conversational characteristics, for example to study how preference-data quality affects model behavior.
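OpenHermes-2.5 derivatives are conventionally prompted with the ChatML format. Assuming this fine-tune inherits that convention (the listing does not confirm it), a minimal prompt-building sketch might look like the following; the helper name `format_chatml` is illustrative, not part of any library:

```python
def format_chatml(messages):
    """Render a list of {"role": ..., "content": ...} dicts as a ChatML prompt.

    ChatML is assumed here because the OpenHermes-2.5 base model uses it;
    verify against the tokenizer's chat template before relying on this.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # A trailing open assistant turn cues the model to begin its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize DPO in one sentence."},
])
print(prompt)
```

The resulting string can then be passed to whatever text-generation endpoint or local inference stack serves the model.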
