Fmuaddib/Qwen2.5-14B-Instruct-Uncensored-mlx-fp16
Fmuaddib/Qwen2.5-14B-Instruct-Uncensored-mlx-fp16 is a 14.8-billion-parameter instruction-tuned language model converted to the MLX format for efficient execution on Apple silicon. Built on the Qwen2.5 architecture, it targets general-purpose conversational AI and text generation, with a context length of 32,768 tokens. Its primary use case is uncensored, instruction-following generation within the MLX ecosystem.
Overview
Fmuaddib/Qwen2.5-14B-Instruct-Uncensored-mlx-fp16 is a 14.8-billion-parameter instruction-tuned language model, converted to the MLX format for efficient execution on Apple silicon. It is derived from the Orion-zhen/Qwen2.5-14B-Instruct-Uncensored base model and was converted with mlx-lm version 0.22.1. It retains the base model's 32,768-token context window, making it suitable for processing longer inputs and generating detailed responses.
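The card does not include a usage snippet; a minimal sketch using the `mlx-lm` package (the same tool named above for the conversion) might look like the following. The `build_chatml_prompt` helper is illustrative: it assembles the ChatML layout that Qwen2.5 chat models use, though in practice `tokenizer.apply_chat_template` on the loaded tokenizer is the preferred route. Running the `__main__` block requires Apple silicon and downloads the model weights.

```python
def build_chatml_prompt(user_message: str,
                        system_message: str = "You are a helpful assistant.") -> str:
    """Assemble a ChatML prompt, the chat format used by Qwen2.5 models.

    Illustrative helper; the loaded tokenizer's apply_chat_template
    produces the same layout and should be preferred in real use.
    """
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )


if __name__ == "__main__":
    # Requires Apple silicon and `pip install mlx-lm`; downloads ~30 GB of fp16 weights.
    from mlx_lm import load, generate

    model, tokenizer = load("Fmuaddib/Qwen2.5-14B-Instruct-Uncensored-mlx-fp16")
    prompt = build_chatml_prompt("Summarize the plot of Hamlet in two sentences.")
    print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```

Because the conversion is fp16 (no quantization), memory requirements are roughly twice the parameter count in bytes, so a Mac with at least 32 GB of unified memory is a reasonable assumption for comfortable inference.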
Key Capabilities
- Instruction Following: Designed to accurately follow user instructions for various text generation tasks.
- Uncensored Responses: Provides direct and unfiltered outputs, suitable for applications requiring less restrictive content policies.
- MLX Optimization: Converted to the MLX format for efficient local inference on Apple silicon.
- Large Context Window: Supports a 32,768-token context, enabling long conversations and document-scale inputs.
Good for
- Developers working with Apple silicon (Macs with M-series chips) who need a powerful, locally runnable LLM.
- Applications requiring an instruction-tuned model capable of generating uncensored content.
- General-purpose text generation, chatbots, and creative writing where a large context is beneficial.