0xA50C1A1/Mistral-Nemo-Instruct-2407-Heretic-v2

Text generation · Concurrency cost: 1 · Model size: 12B · Quant: FP8 · Context length: 32k · Published: Mar 13, 2026 · License: apache-2.0 · Architecture: Transformer

0xA50C1A1/Mistral-Nemo-Instruct-2407-Heretic-v2 is a 12 billion parameter instruction-tuned causal language model derived from unsloth/Mistral-Nemo-Instruct-2407. This version has been processed with Heretic v1.2.0 to substantially reduce refusal rates, yielding a decensored alternative to the original. With a 32768-token context length, it targets applications that need less restrictive content generation than the base model allows.


Model Overview

This model, 0xA50C1A1/Mistral-Nemo-Instruct-2407-Heretic-v2, is a 12 billion parameter instruction-tuned language model based on the Mistral-Nemo architecture. It is a modified version of unsloth/Mistral-Nemo-Instruct-2407, specifically processed using the Heretic v1.2.0 tool.
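As an instruction-tuned Mistral-family model, it expects prompts wrapped in an instruct template. A minimal sketch of building such a prompt, assuming the common Mistral `[INST] ... [/INST]` convention (the `build_prompt` helper and the system-prompt handling are illustrative, not part of this model's documented API; in practice the tokenizer's own chat template should be preferred):

```python
from typing import Optional

def build_prompt(user_message: str, system: Optional[str] = None) -> str:
    """Wrap a user message in the assumed Mistral instruct template."""
    if system:
        # A common convention prepends system text to the first user turn.
        user_message = f"{system}\n\n{user_message}"
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_prompt("Summarize the plot of Hamlet in two sentences.")
print(prompt)
```

The model's completion would then follow the closing `[/INST]` tag.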

Key Differentiator

The primary distinction of this model is its decensored nature. Through the Heretic process, its refusal rate has been drastically reduced, from 88/100 in the original model to 4/100. This makes it suitable for use cases where the original model's content restrictions are too limiting.

Technical Details

The modification process involved adjusting abliteration parameters applied to the attn.o_proj and mlp.down_proj weights. The model retains the 32768-token context length characteristic of the Mistral-Nemo family.
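Abliteration is broadly understood to remove a measured "refusal direction" from weight matrices that write into the residual stream, such as attn.o_proj and mlp.down_proj. A minimal NumPy sketch of that idea, where the dimensions and the random stand-in for the refusal direction are purely illustrative (Heretic's actual per-layer parameters are not reproduced here):

```python
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Project the direction r out of the output space (rows) of W.

    W maps hidden activations into the residual stream; r is a vector in
    the residual stream. After ablation, W @ x has no component along r.
    """
    r = r / np.linalg.norm(r)       # ensure unit length
    return W - np.outer(r, r) @ W   # (I - r r^T) W

rng = np.random.default_rng(0)
d_model, d_hidden = 8, 16           # toy dimensions, illustrative only
W = rng.standard_normal((d_model, d_hidden))
r = rng.standard_normal(d_model)    # stand-in for a measured refusal direction

W_abl = ablate_direction(W, r)
x = rng.standard_normal(d_hidden)
r_unit = r / np.linalg.norm(r)
print(abs(r_unit @ (W_abl @ x)))    # component along r is now ~0
```

The key property is that for any input `x`, the ablated matrix can no longer produce output along `r`, which is the mechanism by which refusal behavior is suppressed while other directions are left intact.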

Use Cases

This model is particularly well-suited for applications that require a language model with fewer content restrictions and a lower propensity for refusals, while still leveraging the performance of the Mistral-Nemo-Instruct-2407 base.