mlabonne/alpagasus-2-7b
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Architecture: Transformer

mlabonne/alpagasus-2-7b is a 7-billion-parameter model fine-tuned by mlabonne from Llama-2-7b-hf using QLoRA (4-bit precision). It was trained on a high-quality 9k-sample subset of the Alpaca dataset and is optimized for instruction-following tasks. The model targets efficient deployment and inference on consumer-grade hardware, offering strong performance for general-purpose text generation.
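A minimal sketch of how a request to this model might be formatted. Since the model was fine-tuned on an Alpaca-derived dataset, it plausibly expects the standard Alpaca prompt template; that template, and the `build_alpaca_prompt` helper below, are assumptions to verify against the model card. The commented-out `transformers` calls show one common way to load and run such a model, but require a GPU and the model weights.

```python
# Sketch: building an Alpaca-style instruction prompt for alpagasus-2-7b.
# Assumption: the model follows the standard Alpaca template used by its
# fine-tuning data; confirm against the model card before relying on it.

def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request in the Alpaca instruction template."""
    if input_text:
        # Variant with additional context in an "### Input:" section.
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Instruction-only variant.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Summarize QLoRA in one sentence.")
print(prompt)

# To run the model itself (requires a GPU and the transformers library):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("mlabonne/alpagasus-2-7b")
# model = AutoModelForCausalLM.from_pretrained("mlabonne/alpagasus-2-7b")
# ids = tok(prompt, return_tensors="pt")
# out = model.generate(**ids, max_new_tokens=128)
# print(tok.decode(out[0], skip_special_tokens=True))
```

The helper only builds the prompt string; model loading is left commented out so the snippet runs anywhere.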
