lmstudio-community/magistral-small-2506-mlx-bf16
TEXT GENERATION
Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Jun 10, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights
Magistral Small by Mistral AI is a 24-billion-parameter language model; this listing is the bfloat16 (bf16) conversion packaged in MLX format for Apple Silicon. The MLX format enables efficient local inference on Apple hardware, targeting developers working within the Apple ecosystem.
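As a sketch of how an MLX-format model like this is typically run locally (this assumes the `mlx-lm` package and an Apple Silicon machine; the model identifier is taken from the listing title, and the prompt is illustrative):

```shell
# Install the MLX language-model runner (Apple Silicon only)
pip install mlx-lm

# Generate text; the weights are downloaded on first run (~24B parameters in bf16)
python -m mlx_lm.generate \
  --model lmstudio-community/magistral-small-2506-mlx-bf16 \
  --prompt "Explain the MLX framework in one sentence." \
  --max-tokens 128
```

Running a bf16 model of this size locally requires substantial unified memory; quantized MLX variants of the same model are a common alternative on machines with less RAM.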