Yuma42/KangalKhan-Sapphire-7B
Task: Text generation
Model size: 7B
Quantization: FP8
Context length: 4k
Published: Feb 15, 2024
License: apache-2.0
Architecture: Transformer
Weights: open

KangalKhan-Sapphire-7B is a 7-billion-parameter language model developed by Yuma42, created by merging argilla/CapybaraHermes-2.5-Mistral-7B and argilla/distilabeled-OpenHermes-2.5-Mistral-7B using slerp (spherical linear interpolation of the parent models' weights). Built on the Mistral architecture, the model has a 4096-token context length and performs well across general-purpose benchmarks covering reasoning, common sense, and language understanding. It is suitable for applications requiring robust conversational AI and text generation capabilities.
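The slerp mentioned above interpolates each pair of parent weight tensors along the arc between them rather than along a straight line, which preserves weight magnitude better than plain averaging. Below is a minimal, self-contained sketch of the operation on flat vectors; it is an illustration of the technique, not the exact implementation used to produce this model.

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values move
    along the arc between the two vectors' directions.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    # Cosine of the angle between the two vectors, clamped for safety.
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    omega = math.acos(dot)
    if abs(math.sin(omega)) < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Halfway between two orthogonal unit vectors stays on the unit sphere.
mid = slerp(0.5, [1.0, 0.0], [0.0, 1.0])
```

In a real merge, this interpolation is applied tensor-by-tensor across both checkpoints, often with a different `t` per layer or per module type.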
