luqmanxyz/Maya_Hermes-2.5-Mistral-7B
TEXT GENERATION · Open Weights
Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Concurrency Cost: 1
Published: Jan 20, 2024 · License: apache-2.0 · Architecture: Transformer

luqmanxyz/Maya_Hermes-2.5-Mistral-7B is a 7-billion-parameter variant of OpenHermes-2.5-Mistral-7B, fine-tuned with Direct Preference Optimization (DPO) on the argilla/distilabel-intel-orca-dpo-pairs dataset. The model is intended for general language tasks and performs well across common reasoning and language-understanding benchmarks. With a 4096-token context length, it is suited to conversational and analytical applications of moderate context size.
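A minimal usage sketch with Hugging Face `transformers` is shown below. It assumes the checkpoint inherits the ChatML prompt format from its OpenHermes-2.5-Mistral-7B base (verify against the upstream model card before relying on it); the `generate_reply` helper is illustrative, not part of any published API.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn ChatML prompt (assumed format, inherited
    from OpenHermes-2.5-Mistral-7B; confirm on the upstream card)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


def generate_reply(user_message: str,
                   system_message: str = "You are a helpful assistant.",
                   max_new_tokens: int = 128) -> str:
    """Run one generation. Downloads ~7B weights on first call and
    needs a GPU (or a lot of patience on CPU)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "luqmanxyz/Maya_Hermes-2.5-Mistral-7B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = build_chatml_prompt(system_message, user_message)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


# Example (downloads the full model; uncomment to run):
# print(generate_reply("Summarize DPO in one sentence."))
```

Prompts longer than the 4096-token context window will be truncated or rejected, so long inputs should be trimmed before calling `generate_reply`.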
