alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16 is a 12-billion-parameter language model, converted to MLX format from DavidAU's original model. It supports a 32,768-token context length and is intended for general text generation and understanding tasks, with its base architecture giving it broad applicability. Its primary use case is in MLX-powered applications, where it offers efficient inference on Apple silicon.
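A minimal usage sketch with the `mlx-lm` package (an assumption: the model card does not specify a loading recipe, but MLX conversions are conventionally loaded this way). Running it requires Apple silicon, `pip install mlx-lm`, and a first-run download of the model weights:

```python
# Sketch: load the MLX-converted model and run a short generation with mlx-lm.
# Requires Apple silicon; the first call downloads the weights from the Hub.
from mlx_lm import load, generate

model, tokenizer = load(
    "alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16"
)

prompt = "Explain the difference between a list and a tuple in Python."

# Apply the model's chat template when the tokenizer defines one,
# so the instruct model sees the prompt in its expected format.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```

Because the weights are stored in fp16, expect roughly 24 GB of unified memory to be used at load time for a 12B model.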