dimodimodimo/Mistral-7B-Instruct-v0.2
Text Generation · Open Weights · Cold
Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k
Published: Mar 28, 2026 · License: apache-2.0 · Architecture: Transformer

dimodimodimo/Mistral-7B-Instruct-v0.2 is an instruction-fine-tuned, 7-billion-parameter causal language model developed by Mistral AI. It is based on the Mistral-7B-v0.2 architecture, which extends the context window to 32k tokens and raises the RoPE theta value to 1e6. The model is optimized for instruction following and coherent text generation, making it suitable for a range of conversational and instruction-driven applications.
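Because the model is instruction-tuned, prompts should follow the Mistral `[INST] … [/INST]` chat layout rather than raw text. As a rough sketch (the helper function below is hypothetical; in practice, prefer the tokenizer's built-in `apply_chat_template`), the format can be assembled like this:

```python
def build_mistral_prompt(messages):
    """Assemble a Mistral-Instruct style prompt from a list of
    {"role": ..., "content": ...} turns.

    Layout (per the Mistral instruct convention):
        <s>[INST] user turn [/INST] assistant reply</s>[INST] next turn [/INST]
    """
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            # User turns are wrapped in [INST] ... [/INST] markers.
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            # Assistant turns follow the closing [/INST] and end with </s>.
            prompt += f" {msg['content']}</s>"
    return prompt


# Single-turn example:
single = build_mistral_prompt(
    [{"role": "user", "content": "Summarize RoPE in one sentence."}]
)
# Multi-turn example: prior assistant replies are closed with </s>.
multi = build_mistral_prompt(
    [
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello!"},
        {"role": "user", "content": "What can you do?"},
    ]
)
```

The string returned by `build_mistral_prompt` can then be tokenized and sent to the model; note that most tokenizers add the `<s>` token themselves, in which case it should be omitted here.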
