sh0ck0r/L3.3-MS-Nevoria-70b-heretic2

Text generation · Concurrency cost: 4 · Model size: 70B · Quant: FP8 · Context length: 32k · Published: Mar 4, 2026 · License: eva-llama3.3 · Architecture: Transformer

sh0ck0r/L3.3-MS-Nevoria-70b-heretic2 is a 70 billion parameter decensored version of the L3.3-MS-Nevoria-70b model, created by sh0ck0r using the Heretic tool. This model merge combines several Llama 3.3-based models to enhance storytelling, detailed scene descriptions, and prose, while specifically reducing positive bias. It is optimized for creative writing and role-playing scenarios, demonstrating strong adherence to system prompts and character consistency.


Model Overview

sh0ck0r/L3.3-MS-Nevoria-70b-heretic2 is a 70 billion parameter model, a decensored variant of the original Steelskull/L3.3-MS-Nevoria-70b created using the Heretic v1.2.0 tool. This model is a unique merge of several Llama 3.3-based components, including EVA-LLAMA-0.1 for storytelling, EURYALE-v2.3 for detailed scene descriptions, Anubis-v1 for enhanced prose, and Negative_LLAMA to reduce positive bias, all built upon a Nemotron-lorablated base model.
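Because the merge is built on Llama 3.3 components, it presumably inherits the standard Llama 3 instruct prompt layout. As a minimal sketch (the special tokens below follow the generic Llama 3 template, which is an assumption about this merge; in practice, `tokenizer.apply_chat_template` from the model's own tokenizer should be treated as authoritative):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 instruct format.

    Assumes this merge keeps Llama 3.3's chat template; verify against
    the repo's tokenizer_config.json / apply_chat_template output.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a narrator who stays in character.",
    "Describe the harbor at dusk.",
)
```

Hand-building the prompt like this is mainly useful for debugging backends that do not apply the chat template automatically.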

Key Differentiators & Capabilities

  • Decensored Output: Significantly reduced refusals (18/100 compared to 98/100 in the original model), allowing for more unconstrained responses.
  • Enhanced Creativity & Storytelling: Users report strong creativity, consistent character adherence, and detailed dialogue, with the model excelling in role-playing and adventure scenarios.
  • Reduced Positive Bias: The inclusion of Negative_LLAMA intentionally balances the model's output, preventing overly positive or censored responses.
  • Robust Context Handling: Demonstrated ability to manage complex scenarios with large lorebooks and multiple characters, even across different locations, without confusion.
  • High Token Capacity: Successfully tested with up to 110,000 tokens, maintaining performance up to 70,000 tokens without degradation.

Performance Highlights

  • UGI Score: 56.75, with a Willingness Score of 7.5/10.
  • Open LLM Benchmark average: 43.92%, with notable scores on IFEval (69.63%) and BBH (56.60%).

Recommended Use Cases

  • Creative Writing: Ideal for generating imaginative stories, detailed narratives, and immersive prose.
  • Role-Playing (RP): Excels in maintaining character adherence, managing complex interactions, and handling sensitive topics without soft refusals.
  • Unconstrained Content Generation: Suitable for applications requiring less filtered or biased output, particularly in scenarios where the original model might refuse.

This model is particularly well-suited for users seeking a powerful, creative, and less restrictive language model for narrative generation and interactive storytelling.