YEqiaosir/Mistral-dolphin-2.8-grok-instract-2-7B-slerp
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Feb 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
YEqiaosir/Mistral-dolphin-2.8-grok-instract-2-7B-slerp is a 7-billion-parameter language model created by YEqiaosir. It was formed by merging nasiruddin15/Mistral-grok-instract-2-7B-slerp and cognitivecomputations/dolphin-2.8-mistral-7b-v02 with the SLERP (spherical linear interpolation) merge method. Built on the Mistral architecture and tuned for general instruction following, the model combines the strengths of its two parents and suits applications that need a capable 7B instruction-tuned model with a 4,096-token context length. A sketch of the interpolation is shown below.
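The exact merge configuration (e.g., per-layer interpolation factors) is not published here, so the following is only a minimal Python/NumPy sketch of what SLERP does to a pair of weight tensors: it blends them along the arc of a hypersphere rather than along a straight line. The interpolation factor `t` and the random "weights" are illustrative stand-ins, not values from this model.

```python
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns w0 (one parent model's weights), t=1 returns w1 (the
    other's); intermediate values blend along the arc between them.
    """
    # Normalize to unit vectors to measure the angle between directions.
    v0 = w0 / (np.linalg.norm(w0) + eps)
    v1 = w1 / (np.linalg.norm(w1) + eps)
    dot = np.clip(np.dot(v0, v1), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the two weight directions

    # Nearly parallel tensors: fall back to plain linear interpolation.
    if omega < eps:
        return (1.0 - t) * w0 + t * w1

    sin_omega = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / sin_omega) * w0 \
         + (np.sin(t * omega) / sin_omega) * w1

# Illustrative only: blend two random "weight" vectors halfway.
w0 = np.random.randn(4096)
w1 = np.random.randn(4096)
merged = slerp(0.5, w0, w1)
```

Compared with a straight average, SLERP preserves the norm-and-direction geometry of the parent weights, which is why it is a popular choice for merging two fine-tunes of the same base model.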
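For deployment, the merged checkpoint loads like any other Mistral-family model. The snippet below is a hedged inference sketch using Hugging Face transformers; it assumes the repository id is reachable on the Hub and ships a chat template, and the prompt text and generation settings are illustrative.

```python
# Minimal inference sketch; repo availability and chat template are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YEqiaosir/Mistral-dolphin-2.8-grok-instract-2-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain what a model merge is in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding keeps the example deterministic; tune as needed.
output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```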