Fmuaddib/Qwen2.5-14B-Instruct-Uncensored-mlx-fp16
Text generation · Model size: 14.8B · Quant: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Apr 18, 2025 · License: GPL-3.0 · Architecture: Transformer
Fmuaddib/Qwen2.5-14B-Instruct-Uncensored-mlx-fp16 is a 14.8-billion-parameter instruction-tuned language model converted to the MLX format for optimized performance on Apple silicon. Based on the Qwen2.5 architecture, it is designed for general-purpose conversational AI and text generation, with a context length of 32,768 tokens. Its primary use case is providing uncensored, instruction-following responses within the MLX ecosystem.
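As an MLX-format model, it can typically be run locally with the `mlx-lm` package on an Apple silicon Mac. A minimal sketch (assuming `mlx-lm` is installed via `pip install mlx-lm` and the weights download from the Hugging Face Hub on first use; the prompt is illustrative):

```python
# Sketch: load the MLX-converted model and generate a response with mlx-lm.
# Requires Apple silicon and `pip install mlx-lm`; the model (~15 GB) is
# fetched from the Hugging Face Hub on first call to load().
from mlx_lm import load, generate

model, tokenizer = load("Fmuaddib/Qwen2.5-14B-Instruct-Uncensored-mlx-fp16")

# Apply the model's chat template so the instruction-tuned model sees the
# expected conversation format.
messages = [{"role": "user", "content": "Summarize what MLX is in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```

The same package also exposes a command-line entry point (`mlx_lm.generate --model <repo> --prompt "..."`) for quick tests without writing Python.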