Inforup982/Harsha-Hermes-2.5-Mistral-7B_safetensors
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Jan 16, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

Harsha-Hermes-2.5-Mistral-7B is a 7-billion-parameter language model from Inforup982, built on the Mistral-7B architecture. It is a DPO (Direct Preference Optimization) fine-tune of teknium/OpenHermes-2.5-Mistral-7B, trained on the Intel/orca_dpo_pairs preference dataset. The model is optimized for conversational and instruction-following tasks, using preference optimization to improve response quality over the base fine-tune.
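As a sketch of how a conversation might be formatted for this model: the OpenHermes-2.5 base fine-tune uses the ChatML conversation format, and a DPO fine-tune typically inherits its base model's prompt format. The helper below is a hypothetical illustration of building a ChatML prompt by hand (in practice, the tokenizer's built-in chat template would handle this); the function name and roles are assumptions, not part of this model card.

```python
# Minimal sketch of ChatML-style prompt construction, assuming this model
# inherits the ChatML format from its OpenHermes-2.5 base fine-tune.
# build_chatml_prompt is a hypothetical helper for illustration only.

def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into a ChatML string."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize DPO in one sentence."},
])
print(prompt)
```

With the `transformers` library, the equivalent result would normally come from `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` after loading the tokenizer from the model repository.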
