argilla/distilabeled-OpenHermes-2.5-Mistral-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jan 9, 2024 · License: apache-2.0 · Architecture: Transformer

argilla/distilabeled-OpenHermes-2.5-Mistral-7B is a 7-billion-parameter language model from Argilla, built on OpenHermes-2.5-Mistral-7B (itself a fine-tune of Mistral-7B) and further fine-tuned with Direct Preference Optimization (DPO) on a refined version of the Intel/orca_dpo_pairs dataset. The DPO step targets improved response quality and alignment, and the model shows gains over its base across common benchmarks, making it well suited to general-purpose conversational AI and instruction-following tasks.
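As a minimal sketch of how one might prompt the model: OpenHermes-2.5-style models expect the ChatML prompt format. The helper below builds such a prompt, and the commented-out lines show a typical way to run generation with the Hugging Face `transformers` pipeline (left commented because loading a 7B model requires substantial memory; the model id comes from this card, everything else is an assumption).

```python
# Sketch, assuming the ChatML format used by OpenHermes-2.5-style models.
def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user turn in ChatML delimiters."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize what DPO fine-tuning does in one sentence.",
)
print(prompt)

# Generation itself (requires the `transformers` library and ~16 GB of memory
# for FP16 weights; commented out to keep this sketch lightweight):
# from transformers import pipeline
# generator = pipeline(
#     "text-generation",
#     model="argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
# )
# print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```

In practice, `tokenizer.apply_chat_template` can build the same prompt from a list of role/content messages, which avoids hand-maintaining the ChatML delimiters.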
