Open4bits/Qwen3-14B-Base-mlx-fp16
Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP16 · Context Length: 32k · Published: Feb 14, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Open4bits/Qwen3-14B-Base-mlx-fp16 is a 14-billion-parameter Qwen3-Base model converted by Open4bits to the MLX format at FP16 precision. The conversion targets efficient, high-performance inference with reduced memory usage and broad hardware compatibility. The model excels at general language understanding, reasoning, and instruction following, making it well suited to high-performance text generation and conversational applications.
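Since the checkpoint is distributed in MLX format, it can plausibly be run with the `mlx-lm` package, the standard loader for MLX-format language models. The package choice is an assumption (the card above only states the format), and running the sketch requires Apple Silicon plus a one-time download of the 14B weights:

```python
# Minimal sketch: load the MLX checkpoint and sample a completion.
# Assumes Apple Silicon and `pip install mlx-lm`; the use of mlx-lm
# is an assumption -- the card only says the model is in MLX format.
from mlx_lm import load, generate

# Fetches the weights from the hub on first use.
model, tokenizer = load("Open4bits/Qwen3-14B-Base-mlx-fp16")

# This is a base model, not chat-tuned, so use a plain completion prompt.
prompt = "The three laws of thermodynamics are"
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```

Because this is a base model, prompts should be phrased as text to be continued rather than as chat-style instructions.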
