wolfeidau/NeuralHermes-2.5-Mistral-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Mar 24, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

NeuralHermes-2.5-Mistral-7B by wolfeidau is a 7-billion-parameter Mistral-based language model fine-tuned with Direct Preference Optimization (DPO) on the Intel/orca_dpo_pairs dataset. It specializes in instruction following and conversational tasks, and is designed for general-purpose assistant chatbot applications, with response quality improved through preference-based learning.
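DPO trains the policy directly on preference pairs: for each prompt it pushes the log-probability of the preferred response up relative to a frozen reference model, and the rejected response down. A minimal sketch of the per-pair DPO loss, assuming summed response log-probabilities are already computed (the function name and scalar inputs are illustrative, not from this model's training code):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: -log(sigmoid(beta * (chosen margin - rejected margin))).

    Each argument is the summed log-probability of the chosen or rejected
    response under the policy being trained or the frozen reference model.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # Numerically stable -log(sigmoid(logits)) = log(1 + exp(-logits))
    return math.log1p(math.exp(-logits))
```

When the policy matches the reference, the loss sits at log 2; it drops below that as the policy learns to favor the chosen response more strongly than the reference does.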
