DavidAU/MN-CaptainErisNebula-Chimera-v1.1-THINKING-ClaudeOpus4.5-12B-heretic-uncensored


Model Overview

DavidAU/MN-CaptainErisNebula-Chimera-v1.1-THINKING-ClaudeOpus4.5-12B-heretic-uncensored is a 12-billion-parameter model built on the Mistral Nemo architecture, developed by DavidAU. This iteration is a "heretic" (decensored) version, fine-tuned on the TeichAI/claude-4.5-opus-high-reasoning-250x dataset to give it "Claude Opus 4.5"-style reasoning and thinking capabilities. It has a significantly reduced refusal rate (4 refusals per 100 prompts, versus 91/100 for the original model) and is designed to follow instructions without censorship.

Key Capabilities

  • Advanced Reasoning: Enhanced with "Claude Opus 4.5" thinking and reasoning, allowing for more complex and structured outputs, as demonstrated by its self-generated thinking processes.
  • Uncensored Generation: This "heretic" version is explicitly designed to be unfiltered, NSFW, and capable of generating vivid, intense, and visceral content, including horror, swearing, and explicit themes, without refusal.
  • High Context Length: Supports a maximum context of 1 million tokens, with 128k to 256k suggested for optimal performance.
  • Instruction Following: "Does what it is told, no questions asked... nothing is off limits," providing direct and compliant responses.

Good For

  • Creative Writing & Roleplay: Excels in generating detailed, immersive, and uncensored narratives, particularly for genres like horror, dark fantasy, or any scenario requiring explicit or intense descriptions.
  • Complex Problem Solving: Its integrated reasoning capabilities make it suitable for tasks requiring structured thought processes and logical deduction.
  • Unrestricted Content Generation: Ideal for use cases where content filtering or refusal is undesirable, offering complete freedom in output generation.

For optimal performance, users are advised to use the suggested sampler settings (temperature 0.7, repetition penalty 1.05, top-p 0.95, min-p 0.05, top-k 40) and to consider quantizations such as Q4_K_S or IQ3_M. A smoothing factor (e.g., 1.5) is also recommended for chat and roleplay applications.
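To make the suggested sampler settings concrete, here is a minimal, illustrative re-implementation in plain Python of how a top-k / top-p / min-p filter chain narrows the candidate tokens at each generation step. This is a sketch only, not the actual code used by llama.cpp or other backends, and the token logits in the example are made-up values; repetition penalty and smoothing are omitted for brevity.

```python
import math

def filter_logits(logits, temperature=0.7, top_k=40, top_p=0.95, min_p=0.05):
    """Return the token ids that survive the suggested sampler chain.

    Illustrative toy implementation; real inference backends apply
    these filters internally over the full vocabulary.
    """
    # Temperature scaling, then a numerically stable softmax.
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}

    # top-k: keep only the k most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # top-p (nucleus): keep the smallest prefix whose cumulative mass
    # reaches top_p.
    kept, mass = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break

    # min-p: drop any token whose probability is below
    # min_p * (probability of the single best token).
    floor = min_p * kept[0][1]
    return [tok for tok, p in kept if p >= floor]
```

With a toy distribution such as `{"a": 5.0, "b": 4.0, "c": 1.0, "d": -2.0}`, the chain keeps only the two dominant tokens: top-p cuts the tail once the cumulative mass passes 0.95, and min-p would further remove any stragglers far below the best candidate. Lower min-p or higher top-p widens the candidate pool, which is why these values are tuned together for creative-writing use.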