alexgusevski/Eva-4B-mlx-fp16
Task: Text generation
Concurrency cost: 1
Model size: 4B
Quantization: BF16
Context length: 32k
Published: Jan 12, 2026
License: apache-2.0
Architecture: Transformer
alexgusevski/Eva-4B-mlx-fp16 is a 4-billion-parameter language model converted to the MLX format from FutureMa/Eva-4B. The conversion targets efficient local deployment and inference on Apple Silicon via the MLX framework, providing a ready-to-use MLX-optimized version of the Eva-4B architecture.
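A minimal sketch of running the model locally with the mlx-lm package (an assumption: this model card does not specify a runtime, but MLX-format text-generation models are commonly loaded this way; requires Apple Silicon and `pip install mlx-lm`):

```python
MODEL_ID = "alexgusevski/Eva-4B-mlx-fp16"

def generate_text(prompt: str, max_tokens: int = 128) -> str:
    """Load the MLX model and generate a completion for `prompt`."""
    # Lazy import: mlx-lm only runs on Apple Silicon.
    from mlx_lm import load, generate

    # `load` downloads the weights from the Hugging Face Hub on first use.
    model, tokenizer = load(MODEL_ID)
    return generate(model, tokenizer, prompt=prompt, max_tokens=max_tokens)

if __name__ == "__main__":
    print(generate_text("Explain the MLX framework in one sentence."))
```

The `max_tokens` cap here is arbitrary; with the model's 32k context length, longer prompts and completions are possible within that budget.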