Featherless-Chat-Models/Mistral-7B-v0.1
TEXT GENERATION · Open Weights
Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · Published: May 8, 2025 · License: apache-2.0 · Architecture: Transformer

Mistral-7B-v0.1 is a 7-billion-parameter pretrained generative text model developed by the Mistral AI team. The transformer architecture incorporates Grouped-Query Attention and Sliding-Window Attention, which let it outperform larger models such as Llama 2 13B on a range of benchmarks. It is designed for general text generation tasks, offering strong performance at a compact size.
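To make the sliding-window idea concrete, here is a minimal NumPy sketch of the attention mask it implies: each query position attends causally, but only to the most recent `window` key positions rather than the full prefix. This is an illustrative sketch, not the model's actual implementation; the toy sizes (`seq_len=8`, `window=4`) are chosen for readability, whereas Mistral-7B's reported sliding window is 4096 tokens.

```python
import numpy as np

def sliding_window_causal_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean mask: True where query i may attend to key j.

    Combines the causal constraint (j <= i) with the sliding-window
    constraint (i - j < window), so each row allows at most `window`
    positions regardless of sequence length.
    """
    i = np.arange(seq_len)[:, None]  # query positions, column vector
    j = np.arange(seq_len)[None, :]  # key positions, row vector
    return (j <= i) & (i - j < window)

# Toy example: 8 tokens, window of 4.
mask = sliding_window_causal_mask(8, window=4)
print(mask.astype(int))
```

Because each row attends to at most `window` keys, attention cost grows linearly with sequence length instead of quadratically, which is one reason a sliding window helps a 7B model handle longer contexts cheaply.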
