osanseviero/Mistral-7B-v0.1
Task: Text Generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Jan 3, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

Mistral-7B-v0.1 is a 7-billion-parameter pretrained generative text model developed by the Mistral AI team. This transformer-based model incorporates Grouped-Query Attention and Sliding-Window Attention for efficient inference, and supports a 4,096-token context length. It outperforms Llama 2 13B on all tested benchmarks, making it a strong choice for general text generation tasks where performance and efficiency are critical.
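To illustrate the Grouped-Query Attention mentioned above, here is a minimal NumPy sketch in which several query heads share a single key/value head. The head counts and dimensions below are illustrative only, not Mistral-7B's actual configuration, and the function name is hypothetical:

```python
import numpy as np

def grouped_query_attention(q, k, v):
    """Toy GQA sketch (hypothetical helper, not Mistral's implementation).

    q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d),
    with n_q_heads an integer multiple of n_kv_heads.
    """
    n_q_heads, seq, d = q.shape
    n_kv_heads = k.shape[0]
    group = n_q_heads // n_kv_heads  # query heads sharing each KV head
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                        # KV head this query head maps to
        scores = q[h] @ k[kv].T / np.sqrt(d)   # (seq, seq) attention logits
        # (a causal / sliding-window mask would be applied to `scores` here)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        out[h] = weights @ v[kv]
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4, 16))  # 8 query heads
k = rng.standard_normal((2, 4, 16))  # only 2 shared KV heads (4:1 grouping)
v = rng.standard_normal((2, 4, 16))
out = grouped_query_attention(q, k, v)
print(out.shape)  # (8, 4, 16)
```

Sharing KV heads this way shrinks the KV cache, which is one reason the model can serve long contexts efficiently; Sliding-Window Attention further bounds each token's attention span.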
