llmfan46/Mistral-Small-3.2-24B-Instruct-2506-ultra-uncensored-heretic

Hosted on Hugging Face.

Vision: supported · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Context Length: 32k · Published: Mar 19, 2026 · License: apache-2.0 · Architecture: Transformer

The llmfan46/Mistral-Small-3.2-24B-Instruct-2506-ultra-uncensored-heretic model is a decensored version of Mistral AI's Mistral-Small-3.2-24B-Instruct-2506, created by llmfan46 with the Heretic v1.2.0 tool using its Arbitrary-Rank Ablation (ARA) method. It cuts content refusals by roughly 98% (2/100 prompts refused vs. 98/100 for the original) while preserving the original model's quality, with a KL divergence of only 0.0369. The model retains strong instruction following, function calling, and vision reasoning, making it suitable for applications that need less restrictive content generation alongside robust task execution.


Model Overview

This model, llmfan46/Mistral-Small-3.2-24B-Instruct-2506-ultra-uncensored-heretic, is a decensored variant of Mistral AI's Mistral-Small-3.2-24B-Instruct-2506. It was developed by llmfan46 using the Heretic v1.2.0 tool, specifically employing the Arbitrary-Rank Ablation (ARA) method to modify the original model's behavior.
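The Heretic/ARA implementation itself is not reproduced in this card, but the underlying idea of ablation-based decensoring can be illustrated with its rank-1 special case: removing the component of a weight matrix's rows along a learned "refusal direction", so the layer can no longer write to that direction. The sketch below is purely illustrative (the matrix, direction, and function names are all made up for the example):

```python
# Toy sketch of rank-1 directional ablation. Values and names are
# illustrative only; this is NOT the actual Heretic/ARA code.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ablate_direction(weight_rows, direction):
    """Remove each row's component along `direction`,
    so the layer can no longer write to that direction."""
    n = dot(direction, direction) ** 0.5
    unit = [x / n for x in direction]
    out = []
    for row in weight_rows:
        proj = dot(row, unit)
        out.append([w - proj * u for w, u in zip(row, unit)])
    return out

# A 2x3 "weight matrix" and a hypothetical refusal direction.
W = [[1.0, 2.0, 3.0],
     [0.5, -1.0, 4.0]]
refusal_dir = [0.0, 1.0, 0.0]

W_ablated = ablate_direction(W, refusal_dir)
# Every row of W_ablated now has zero component along refusal_dir.
```

ARA generalizes this from a single direction to an arbitrary-rank subspace; the rank-1 case above conveys the mechanism without the tuning details.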

Key Differentiators

  • Significantly Reduced Refusals: Achieves a 98% reduction in content refusals, with only 2 out of 100 prompts resulting in refusal, compared to 98 out of 100 for the original model. This makes it highly suitable for use cases requiring less restrictive content generation.
  • Quality Preservation: Despite decensoring, the model maintains high quality, exhibiting a low KL divergence of 0.0369 from the original model. This indicates that its core capabilities and coherence are largely preserved.
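The card does not say exactly how the 0.0369 KL divergence was measured, but a common approach is to average the KL divergence between the two models' next-token probability distributions over a prompt set. A minimal stdlib sketch of the per-position computation (distributions here are toy values, not model outputs):

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two nearly identical next-token distributions (illustrative numbers).
p = [0.70, 0.20, 0.10]  # e.g. original model
q = [0.68, 0.21, 0.11]  # e.g. ablated model
d = kl_divergence(p, q)
# A small d means the ablated model's predictions stay close
# to the original's, i.e. quality is largely preserved.
```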

Core Capabilities (inherited from Mistral-Small-3.2-24B-Instruct-2506)

  • Enhanced Instruction Following: Improves upon its predecessor in accurately following precise instructions.
  • Reduced Repetition Errors: Produces fewer infinite generations or repetitive answers, leading to more concise and relevant outputs.
  • Robust Function Calling: Features a more robust function calling template, excelling in tool-use tasks.
  • Vision Reasoning: Capable of processing and reasoning with image inputs, as demonstrated by examples involving image analysis for decision-making.
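For the function-calling capability, most serving stacks (vLLM, llama.cpp server, etc.) accept Mistral tool definitions in the common OpenAI-style JSON schema. The tool name, parameters, and endpoint behavior below are assumptions for illustration, not taken from this model card:

```python
import json

# Hypothetical tool definition in the OpenAI-style schema.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
            },
            "required": ["city"],
        },
    },
}

# Chat-completions request body advertising the tool to the model.
request_body = {
    "model": "llmfan46/Mistral-Small-3.2-24B-Instruct-2506-ultra-uncensored-heretic",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",
}

payload = json.dumps(request_body)
```

When the model decides to call the tool, the server returns a `tool_calls` entry with the arguments as JSON, which your application executes and feeds back as a `tool` role message.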

Performance Highlights

  • Instruction Following: Achieves 65.33% on Wildbench v2 and 43.1% on Arena Hard v2, showing significant improvements over the 3.1 version.
  • STEM Benchmarks: Demonstrates strong performance in STEM tasks, with 69.06% on MMLU Pro (5-shot CoT), 78.33% on MBPP Plus - Pass@5, and 92.90% on HumanEval Plus - Pass@5.

Ideal Use Cases

This model is particularly well-suited for developers and applications that require a powerful, instruction-tuned language model with minimal content restrictions, while still benefiting from strong instruction following, function calling, and multimodal (vision) capabilities.
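To exercise the vision capability through an OpenAI-compatible endpoint, images are typically sent as base64 data URLs inside a multimodal message. The field names below follow the widely used convention rather than anything specified in this card, and the image bytes are a stand-in:

```python
import base64
import json

# Stand-in bytes; use a real image file in practice.
fake_png = b"\x89PNG fake image bytes"
b64 = base64.b64encode(fake_png).decode("ascii")

# Hedged sketch of a multimodal chat-completions request body.
request_body = {
    "model": "llmfan46/Mistral-Small-3.2-24B-Instruct-2506-ultra-uncensored-heretic",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
    "max_tokens": 256,
}
payload = json.dumps(request_body)
```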