DarkArtsForge/Morax-24B-v1

Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Jan 3, 2026 · Architecture: Transformer

DarkArtsForge/Morax-24B-v1 is a 24-billion-parameter merged language model based on the Mistral architecture, designed for creative, intelligent, and uncensored content generation. It targets narrative writing and roleplay, including potentially violent and graphic erotic content, and supports a 32,768-token context window. The model is specifically tuned to avoid refusals, making it suitable for diverse and unrestricted text generation tasks.


Morax-24B-v1: Uncensored Creative Generation

Morax-24B-v1 is a 24 billion parameter merged language model, developed by DarkArtsForge, focusing on creative, intelligent, and uncensored text generation. Built using mergekit with the SLERP method, it combines Naphula/BeaverAI_Fallen-Mistral-Small-3.1-24B-v1e_textonly and TheDrummer/Precog-24B-v1.
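The SLERP (spherical linear interpolation) merge method blends the two parent models' weights along the arc between them rather than along a straight line, which tends to preserve the geometry of each parent's weight space better than plain averaging. A minimal, illustrative sketch of the interpolation itself (toy vectors standing in for weight tensors; this is not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    Computes the angle between the normalized vectors, then blends the
    original (unnormalized) vectors along that arc. Falls back to linear
    interpolation when the vectors are nearly parallel, where the slerp
    formula becomes numerically unstable.
    """
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

# Interpolate halfway between two toy "weight" vectors.
w1 = np.array([1.0, 0.0])
w2 = np.array([0.0, 1.0])
mid = slerp(0.5, w1, w2)  # lies on the unit arc between w1 and w2
```

In a real mergekit SLERP merge, this interpolation is applied tensor by tensor across the two checkpoints, with `t` configurable per layer group.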

Key Characteristics

  • Uncensored Output: Designed to produce content without refusals, including potentially violent and graphic erotic narratives and roleplay.
  • Creative & Intelligent: Aims for high-quality, imaginative text generation.
  • Flexible Thinking: The model's 'thinking' step can be enabled or disabled via prompting.
  • Mistral V7 Tekken Template: Recommended for optimal performance.
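If your inference stack does not apply the chat template automatically, the prompt can be assembled by hand. The sketch below assumes the commonly documented Mistral V7 Tekken tag layout (`[SYSTEM_PROMPT]…[/SYSTEM_PROMPT][INST]…[/INST]`); the helper name is hypothetical, and the exact tags should be verified against the chat template shipped in the model's `tokenizer_config.json`:

```python
def format_v7_tekken(system: str, user: str) -> str:
    # Hypothetical helper; tag names follow the Mistral V7 Tekken
    # chat template as commonly documented -- verify against the
    # model's bundled tokenizer_config.json before relying on it.
    return (
        f"<s>[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT]"
        f"[INST]{user}[/INST]"
    )

prompt = format_v7_tekken("You are a storyteller.", "Begin a short tale.")
```

When using `transformers`, prefer `tokenizer.apply_chat_template(...)`, which reads the template directly from the checkpoint instead of hard-coding the tags.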

Use Cases

  • Unrestricted Storytelling: Ideal for generating diverse and boundary-pushing narratives.
  • Roleplay Scenarios: Suited for complex and uncensored roleplaying interactions.
  • Content Exploration: Useful for applications requiring models with minimal content restrictions.