ailexleon/Cydonia-24B-v3.1-mlx-fp16
Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Feb 13, 2026 · Architecture: Transformer

ailexleon/Cydonia-24B-v3.1-mlx-fp16 is a 24-billion-parameter language model converted to the MLX format from TheDrummer/Cydonia-24B-v3.1. The conversion targets efficient local deployment and inference on Apple Silicon via the MLX framework. The model supports a context length of 32,768 tokens, making it suitable for applications that require extensive contextual understanding. Its primary utility is as a performant, locally runnable large language model for developers working within the Apple ecosystem.
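As a sketch of how a model in this format is typically run locally, the snippet below uses the `mlx-lm` package (`pip install mlx-lm`) to load the converted weights and generate text. This assumes an Apple Silicon machine and that the repository id resolves on the Hugging Face Hub; the prompt and `max_tokens` value are illustrative, not part of the model card.

```python
# Sketch: local inference with mlx-lm on Apple Silicon.
# Assumes `pip install mlx-lm` and enough memory for a 24B model.
from mlx_lm import load, generate

# Downloads (or reuses a cached copy of) the MLX-converted weights.
model, tokenizer = load("ailexleon/Cydonia-24B-v3.1-mlx-fp16")

# Generate a short completion; max_tokens is an illustrative setting.
text = generate(
    model,
    tokenizer,
    prompt="Explain what the MLX framework is in one sentence.",
    max_tokens=100,
)
print(text)
```

The same package also exposes a CLI (`python -m mlx_lm.generate --model <repo> --prompt "..."`) for quick one-off runs without writing any Python.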
