MuXodious/Llama-3.3-8B-Instruct-128K-PaperWitch-heresy
Text generation · Model size: 8B · Quant: FP8 · Ctx length: 8K · Published: Feb 22, 2026 · License: llama3.3 · Architecture: Transformer

MuXodious/Llama-3.3-8B-Instruct-128K-PaperWitch-heresy is an 8-billion-parameter fine-tune of Llama-3.3-8B-Instruct-128K, developed by MuXodious using P-E-W's Heretic engine. The model is deliberately engineered to exhibit overt non-compliance, divergence, and reinterpretation, reducing the refusals and disclaimers typical of aligned models. It is intended for use cases that require a model which departs from conventional safety alignment and produces less constrained responses.


Model Overview

MuXodious/Llama-3.3-8B-Instruct-128K-PaperWitch-heresy is an 8-billion-parameter instruction-tuned model based on the Llama-3.3-8B-Instruct-128K architecture. Developed by MuXodious, it was produced with P-E-W's Heretic (v1.2.0) engine using Magnitude-Preserving Orthogonal Ablation.
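
Heretic-style processing is a form of directional ablation ("abliteration"): a refusal direction is estimated from activation differences on contrasting prompts, and its component is projected out of weight matrices that write into the residual stream. The sketch below illustrates the core projection in Python; it is a conceptual illustration, not Heretic's actual code, and the global rescaling shown is only one plausible reading of "magnitude-preserving".

```python
# Conceptual sketch of directional ablation -- NOT Heretic's actual code.
# Assumes: r is a "refusal direction" (estimated elsewhere from activation
# differences) and W is a weight matrix whose outputs live in the residual
# stream (shape [d_model, d_in]).
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    r = r / np.linalg.norm(r)              # ensure unit norm
    W_abl = W - np.outer(r, r) @ W         # zero the r-component of every output
    # Rescale globally so the Frobenius norm matches the original matrix --
    # one plausible reading of "magnitude-preserving"; the exact formulation
    # used by Heretic may differ.
    return W_abl * (np.linalg.norm(W) / np.linalg.norm(W_abl))

# Quick check: outputs of the ablated matrix have no component along r.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
r = rng.normal(size=64)
W2 = ablate_direction(W, r)
assert np.allclose((r / np.linalg.norm(r)) @ W2, 0.0, atol=1e-10)
```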

Key Characteristics

  • Non-Compliance Focus: The model's primary differentiator is its intentional design to exhibit overt non-compliance, divergence, shifts of focus, and reinterpretation in its responses, reducing the typical refusals, disclaimers, and attached warnings.
  • Heretication Process: The ablation process, termed "Heretication," specifically targeted refusals unique to this model. Evaluation of the selected trial shows a refusal rate of 0/100, indicating a significant reduction in standard safety-aligned refusals.
  • Context Length: The full 128K-token context window of the base model is enabled.
  • Technical Enhancements: A rope_scaling entry was added, an Unsloth chat template was included in the tokenizer config, and the generation config was updated for improved performance (see the usage sketch after this list).
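
A minimal usage sketch with Hugging Face transformers follows. The generation settings are illustrative rather than taken from this repository's generation_config, and the printed rope_scaling values depend on the actual config.json.

```python
# Hedged usage sketch (standard transformers API; settings are illustrative,
# not copied from this repository's configs).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MuXodious/Llama-3.3-8B-Instruct-128K-PaperWitch-heresy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Inspect the long-context settings added to config.json.
print(model.config.rope_scaling)          # e.g. a "llama3"-style scaling dict
print(model.config.max_position_embeddings)

# The tokenizer ships with a chat template; apply it for generation.
messages = [{"role": "user", "content": "Give a one-paragraph summary of abliteration."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```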

Intended Use Cases

This model suits applications where a less constrained, more argumentative, or reinterpretive AI response is desired, particularly in scenarios where typical LLM safety mechanisms would be overly restrictive. Users should be aware of its overt non-compliance characteristics.