bboeun/sft-mistral7b-base-hh-2
Text generation | Concurrency cost: 1 | Model size: 7B | Quant: FP8 | Context length: 4k | Published: Mar 2, 2026 | Architecture: Transformer
bboeun/sft-mistral7b-base-hh-2 is a 7-billion-parameter language model based on the Mistral architecture. It is a supervised fine-tuned (SFT) version of a base Mistral 7B model, likely adapted for conversational or instruction-following tasks. Its 4096-token context length supports moderately long inputs across natural language understanding and generation applications. The model is intended for general-purpose text generation, with supervised fine-tuning aimed at more natural, human-like interaction.
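One practical consequence of the 4096-token window is that prompt length and generation length share the same budget. The sketch below is a minimal, hypothetical illustration of that budgeting; it uses a rough 4-characters-per-token heuristic rather than the model's real tokenizer (a real deployment would count tokens with the model's own tokenizer, e.g. via Hugging Face transformers), and the function names are ours, not part of this model's API.

```python
CTX_LEN = 4096  # context length stated on this model card


def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)


def max_new_tokens(prompt: str, ctx_len: int = CTX_LEN) -> int:
    """Tokens left in the context window for generation after the prompt."""
    return max(0, ctx_len - approx_tokens(prompt))


# Example: how much room is left for the model's reply after this prompt.
prompt = "Human: Summarize the Mistral 7B architecture.\n\nAssistant:"
print(max_new_tokens(prompt))
```

If the prompt alone exhausts the window, `max_new_tokens` returns 0, signaling that the input must be truncated or summarized before generation.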