DarkArtsForge/Magistaroth-24B-v1.2

Hosted on: Hugging Face
Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Apr 23, 2026 · License: apache-2.0 · Architecture: Transformer

Magistaroth-24B-v1.2 by DarkArtsForge is a 24 billion parameter causal language model based on the Mistral architecture, merged using the DELLA method from 2509 finetunes. This model is specifically designed for highly creative, uncensored narrative generation and roleplay, capable of producing graphic and violent content without requiring jailbreaks or ablations. Its primary use case is for applications demanding unrestricted creative writing and storytelling.
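
Given the FP8 quantization and 32k context window in the listing above, a minimal serving sketch with vLLM might look like the following. This is an illustration, not an official recipe: it assumes the Hugging Face repo id matches the page title and that the FP8 checkpoint is vLLM-compatible.

```python
# Hedged serving sketch with vLLM; repo id assumed from the page title.
from vllm import LLM, SamplingParams

llm = LLM(
    model="DarkArtsForge/Magistaroth-24B-v1.2",  # assumed repo id
    quantization="fp8",      # matches the FP8 quant shown in the listing
    max_model_len=32768,     # 32k context length from the listing
)

params = SamplingParams(temperature=0.8, max_tokens=512)
outputs = llm.generate(
    ["Write the opening scene of a dark fantasy tale."], params
)
print(outputs[0].outputs[0].text)
```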


Magistaroth 24B v1.2 Overview

Magistaroth 24B v1.2 builds on the MistralForCausalLM architecture and was produced with the DELLA merge method from 2509 finetunes. The merge is fully uncensored out of the box: it generates narrative and roleplay content, including violent and graphic erotic material, without post-merge ablations or jailbreaks.

Key Capabilities

  • Uncensored Content Generation: Excels at generating unrestricted narratives and roleplay, including graphic and violent themes.
  • High Creativity: Engineered for highly creative and imaginative text generation.
  • No Jailbreaks Required: Designed to operate without the need for external prompts or modifications to bypass censorship.
  • Mistral Tekken Chat Template: Use the Mistral Tekken chat template for best results (see the sketch after this list).
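
Since the card recommends the Mistral Tekken chat template, a minimal inference sketch with transformers is shown below. It assumes the repo id from the page title and that the repo's tokenizer bundles the Tekken template, so apply_chat_template picks it up automatically; the prompts are placeholders.

```python
# Minimal inference sketch; repo id and bundled chat template are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DarkArtsForge/Magistaroth-24B-v1.2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a vivid, uncensored storyteller."},
    {"role": "user", "content": "Continue the duel on the clifftop."},
]
# apply_chat_template formats the turns with the template shipped in the repo
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, temperature=0.8, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```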

Good For

  • Creative Writing: Ideal for generating complex, imaginative, and unrestricted stories.
  • Roleplay Scenarios: Suited for dynamic and uncensored roleplaying experiences.
  • Experimental Narrative Development: Useful for developers exploring the boundaries of AI-generated content without inherent censorship.

This v1.2 iteration scores lower on the Q0 bench than the original v1 but is considered superior to the experimental v1.1 pdq build; careful selection of donor models gives it a balance of cognitive ability and uncensored output.