kerolos1/Mistral-7B-Instruct-v0.1-Full-Final
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Apr 3, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
kerolos1/Mistral-7B-Instruct-v0.1-Full-Final is an instruction-tuned, 7-billion-parameter large language model built on Mistral AI's Mistral-7B-v0.1 architecture, which uses Grouped-Query Attention and Sliding-Window Attention for efficient processing. The model is fine-tuned on publicly available conversation datasets, making it suitable for instruction-following tasks and general conversational AI applications. It supports a 4096-token context length and is intended as a quick demonstration of fine-tuning capabilities.
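Since this is a Mistral-7B-Instruct-v0.1 fine-tune, prompts typically need to follow the Mistral instruct chat format, with user turns wrapped in `[INST] ... [/INST]` tags. A minimal sketch (the helper name `build_mistral_prompt` is illustrative, not part of the model's tooling):

```python
def build_mistral_prompt(user_message: str) -> str:
    """Wrap a user message in the Mistral-7B-Instruct-v0.1 prompt template.

    The tokenizer's chat template produces the same shape:
    <s>[INST] {message} [/INST]
    """
    return f"<s>[INST] {user_message.strip()} [/INST]"

prompt = build_mistral_prompt("Summarize sliding-window attention in one sentence.")
print(prompt)
```

The formatted string can then be passed to any inference endpoint or tokenizer that serves this model; with the `transformers` library, `tokenizer.apply_chat_template` builds the same format automatically.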