alexsherstinsky/Mistral-7B-v0.1-sharded
The alexsherstinsky/Mistral-7B-v0.1-sharded model is a sharded version of Mistral AI's 7-billion-parameter pretrained generative text model, Mistral-7B-v0.1. The model uses Grouped-Query Attention and Sliding-Window Attention, and outperforms Llama 2 13B across all tested benchmarks. It is designed for general text generation tasks, offering strong performance for its size with an 8192-token context length.
Model Overview
This model, alexsherstinsky/Mistral-7B-v0.1-sharded, is a sharded variant of the original Mistral-7B-v0.1 Large Language Model developed by Mistral AI. It is a 7-billion-parameter pretrained generative text model whose weights have been split into shards of at most 2 GB each, so that loading never requires materializing the full checkpoint in RAM at once. The model incorporates advanced architectural features such as Grouped-Query Attention and Sliding-Window Attention, alongside a Byte-fallback BPE tokenizer.
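A minimal loading sketch with Hugging Face transformers is shown below. The repository id comes from this card; the dtype choice and `device_map="auto"` (which requires the accelerate package) are illustrative assumptions, not requirements stated by the card.

```python
# Minimal sketch: load the sharded checkpoint without building the full
# state dict in RAM. dtype and device placement are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alexsherstinsky/Mistral-7B-v0.1-sharded"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps the 7B weights at roughly 14 GB
    low_cpu_mem_usage=True,     # stream the <=2 GB shards in one at a time
    device_map="auto",          # let accelerate place layers on available devices
)
```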
Key Capabilities & Performance
- Strong Performance: Mistral-7B-v0.1 demonstrates superior performance compared to Llama 2 13B across all evaluated benchmarks, making it a highly efficient model for its parameter count.
- Efficient Architecture: Grouped-Query Attention reduces inference cost by sharing key/value heads across groups of query heads, while Sliding-Window Attention lets the model attend over long inputs at reduced memory cost.
- Context Length: It supports an 8192-token context window, allowing it to process long input sequences (see the generation sketch after this list).
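The short sketch below shows basic text generation with the model; it assumes the `model` and `tokenizer` objects from the loading sketch above, and the prompt and sampling settings are illustrative choices, not recommendations from this card.

```python
# Usage sketch (continues from the loading example above).
prompt = "The three laws of thermodynamics are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,  # well within the 8192-token context window
    do_sample=True,      # sampled decoding; greedy decoding is also fine
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```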
Use Cases
This model is suitable for a wide range of general text generation tasks where a balance between performance and computational resource efficiency is desired. The sharded checkpoint makes it particularly useful in memory-constrained environments, enabling easier deployment and experimentation with a capable 7B-parameter model.
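For even tighter memory budgets, one common option is quantized loading. The sketch below uses 4-bit quantization via bitsandbytes; this is an assumption about a typical deployment pattern, not guidance from this card, and it requires the bitsandbytes and accelerate packages.

```python
# Hedged sketch: 4-bit quantized loading for memory-constrained environments.
# Quantization settings here are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 over 4-bit stored weights
)

model = AutoModelForCausalLM.from_pretrained(
    "alexsherstinsky/Mistral-7B-v0.1-sharded",
    quantization_config=quant_config,
    device_map="auto",
)
```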