alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16
alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16 is a 12-billion-parameter language model converted to MLX format from DavidAU's original model, with a 32768-token context length. It targets general text generation and instruction-following tasks, and its primary use case is MLX-powered applications with efficient inference on Apple silicon.
Overview
This model, alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16, is a 12-billion-parameter language model converted to MLX format from the original DavidAU/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus model, using mlx-lm version 0.29.1. The conversion enables optimized inference on Apple silicon.
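Because the model was converted with mlx-lm, it can be loaded through that library's standard Python API. The following is a minimal sketch, assuming mlx-lm is installed (`pip install mlx-lm`) and you are on an Apple-silicon Mac; the prompt text is illustrative, not part of this card.

```python
# Minimal mlx-lm usage sketch (requires Apple silicon; the model is
# downloaded from the Hugging Face Hub on first load).
from mlx_lm import load, generate

model, tokenizer = load(
    "alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16"
)

prompt = "Explain the MLX framework in one paragraph."

# Instruction-tuned models expect their chat template; apply it if present.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

With `verbose=True`, generated tokens stream to stdout as they are produced; the full completion is also returned as a string.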
Key Capabilities
- MLX Compatibility: Specifically formatted for use with the MLX framework, ensuring efficient execution on compatible hardware.
- General Text Generation: Capable of generating human-like text based on provided prompts.
- Instruction Following: Designed to respond to instructions, making it suitable for various conversational and task-oriented applications.
Good for
- Developers working with Apple silicon who require an optimized 12B parameter model.
- Applications requiring general-purpose text generation and understanding.
- Experimentation and deployment within the MLX ecosystem.
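For quick experimentation without writing Python, mlx-lm also ships a command-line entry point. A hedged sketch, assuming mlx-lm is installed on an Apple-silicon machine; the prompt is an arbitrary example.

```shell
# Install mlx-lm, which provides both the Python API and CLI tools.
pip install mlx-lm

# One-off generation from the command line; the model is fetched from the
# Hugging Face Hub on first use.
mlx_lm.generate \
  --model alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16 \
  --prompt "Write one sentence about Apple silicon."
```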