IntelLabs/sqft-mistral-7b-v0.3-50-base
Text generation · 7B parameters · FP8 quantization · 4k context length · Published: Jun 25, 2024 · License: apache-2.0 · Architecture: Transformer

The IntelLabs/sqft-mistral-7b-v0.3-50-base model is a 7-billion-parameter language model from IntelLabs, derived from Mistral-7B-v0.3 and pruned to 50% sparsity with the Wanda method. It serves as the base model for SQFT's low-cost model adaptation of low-precision sparse foundation models, making it suitable for resource-constrained environments.
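The Wanda criterion used to reach the 50% sparsity scores each weight by its magnitude times the norm of the corresponding input activation, then drops the lowest-scoring weights. As a rough illustration (not the IntelLabs implementation, and using a toy unstructured per-row scheme with made-up calibration data), the idea can be sketched as:

```python
import numpy as np

def wanda_prune(W, X, sparsity=0.5):
    """Zero out the lowest-scoring weights in each output row.

    Wanda scores weight W[i, j] as |W[i, j]| * ||X[:, j]||_2, where X is a
    batch of calibration activations (n_samples x in_features). This is a
    simplified sketch, not the actual SQFT pruning code.
    """
    scores = np.abs(W) * np.linalg.norm(X, axis=0)   # shape: (out, in)
    k = int(W.shape[1] * sparsity)                   # weights to drop per row
    drop = np.argsort(scores, axis=1)[:, :k]         # lowest-score columns
    mask = np.ones_like(W, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return W * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))        # toy weight matrix
X = rng.normal(size=(16, 8))       # toy calibration activations
W_sparse = wanda_prune(W, X, sparsity=0.5)
# each row of W_sparse now has exactly half of its weights zeroed
```

Unlike magnitude pruning, the activation-norm factor lets small weights survive when they feed unusually large inputs, which is why Wanda needs only a small calibration set and no retraining.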
