0xA50C1A1/Mistral-Nemo-Instruct-2407-Heretic
0xA50C1A1/Mistral-Nemo-Instruct-2407-Heretic is a 12-billion-parameter instruction-tuned causal language model, derived from unsloth/Mistral-Nemo-Instruct-2407 and processed with Heretic v1.2.0. It is engineered as a 'decensored' variant, exhibiting a much lower refusal rate than the original model. With a 32768-token context length, it targets applications that call for less restrictive content generation and direct responses.
Model Overview
This model, 0xA50C1A1/Mistral-Nemo-Instruct-2407-Heretic, is a 12 billion parameter instruction-tuned variant based on the unsloth/Mistral-Nemo-Instruct-2407 architecture. It has been modified using the Heretic v1.2.0 tool to significantly alter its refusal behavior.
Key Differentiators
- Decensored Output: The model's defining trait is its 'decensored' behavior, achieved through the abliteration parameters Heretic applied during its creation (abliteration suppresses the refusal-mediating directions in the model's weights). The result is a drastically lower refusal rate than the original model's.
- Reduced Refusals: In evaluation, this model refused 6 of 100 test prompts, versus 88 of 100 for the original model, making it suitable for use cases where direct, unfiltered responses are preferred.
- Mistral-Nemo Base: Built upon the Mistral-Nemo-Instruct-2407 foundation, it inherits the general capabilities of that model family, including a 32768 token context length.
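The refusal numbers above are easiest to read as a relative reduction; a quick check of the arithmetic:

```python
# Refusal counts reported on this card (out of 100 test prompts each).
original_refusals = 88
heretic_refusals = 6

# Relative reduction in refusals after abliteration.
relative_reduction = (original_refusals - heretic_refusals) / original_refusals
print(f"{relative_reduction:.0%}")  # → 93%
```

In other words, abliteration eliminated roughly 93% of the refusals the base model produced on this prompt set.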
Use Cases
This model is particularly well-suited for applications where:
- Unfiltered Content Generation: direct, less restrictive text generation is required.
- Research: studying the impact of 'decensoring' (abliteration) techniques on LLM behavior and output.
- Creative Writing/Roleplay: scenarios where the base model's built-in refusal mechanisms would hinder creative freedom.
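A minimal way to try the model is with Hugging Face transformers. This is a sketch, not an official quick-start: the `generate` helper and its settings are illustrative, and calling it downloads the full 12B-parameter weights. The imports are deferred into the function so the sketch can be read (or defined) without triggering a download.

```python
MODEL_ID = "0xA50C1A1/Mistral-Nemo-Instruct-2407-Heretic"
MAX_CONTEXT = 32768  # context length stated above


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and answer a single user prompt.

    Imports are deferred so the function documents the flow without
    downloading the (large) weights at definition time.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # The model is instruction-tuned, so route the prompt through the
    # chat template rather than feeding raw text.
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

Because the model inherits Mistral-Nemo's chat template, any framework that supports Mistral-Nemo-Instruct-2407 (vLLM, llama.cpp conversions, etc.) should serve this variant the same way, with only the model ID changed.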