RicardoEstep/AuroGodSlayerEtherealKrix-12B-Wg

Text Generation

  • Concurrency Cost: 1
  • Model Size: 12B
  • Quant: FP8
  • Context Length: 32k
  • Published: Dec 25, 2025
  • Architecture: Transformer
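From the listed specs, a rough memory estimate for loading the model follows directly: 12B parameters at FP8 (one byte per parameter) gives the weight footprint alone. The sketch below is a back-of-the-envelope calculation, not a measured figure; runtime overhead (KV cache, activations) comes on top and depends on batch size and context length.

```python
# Back-of-the-envelope VRAM estimate for the weights, assuming the
# card's listed specs: 12B parameters, FP8 quantization (1 byte/param).
# Treat this as a lower bound on actual memory use.
params = 12e9          # 12B parameters (from the model card)
bytes_per_param = 1    # FP8 stores one byte per parameter
weights_gb = params * bytes_per_param / 1024**3
print(f"~{weights_gb:.1f} GiB for weights alone")  # ~11.2 GiB
```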

RicardoEstep/AuroGodSlayerEtherealKrix-12B-Wg is a 12-billion-parameter experimental Karcher-mean merge of three uncensored roleplay models, built upon DreadPoor/Krix-12B-Model_Stock. The model is designed for creative writing, worldbuilding, and dialogue with distinct character voices, offering a balanced blend of expressiveness, creativity, and a slightly dark, mythic undertone. With a 32,768-token context length it excels at narrative tasks and stylized prose, though it is not fine-tuned for factual or coding applications.


Overview

RicardoEstep/AuroGodSlayerEtherealKrix-12B-Wg is a 12-billion-parameter experimental model created by RicardoEstep with Mergekit's Karcher-mean merge method. It combines three uncensored roleplay models: DreadPoor/Krix-12B-Model_Stock (weight 0.4), yamatazen/EtherealAurora-12B-Lorablated (weight 0.3), and redrix/GodSlayer-12B-ABYSS (weight 0.3). The merge aims to produce a balanced model that is expressive and creative, carries a slightly dark, mythic undertone, and remains uncensored.
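The Karcher (Fréchet) mean behind this merge method averages points along geodesics rather than linearly. The toy sketch below illustrates the idea for unit vectors on a sphere via iterated tangent-space averaging; it is an illustration of the underlying math under simplified assumptions, not Mergekit's actual implementation.

```python
# Toy weighted Karcher mean on the unit sphere: repeatedly map points
# into the tangent space at the current estimate (log map), take the
# weighted average there, and step back onto the sphere (exp map).
import numpy as np

def karcher_mean(points, weights, iters=100, tol=1e-10):
    """Weighted Karcher mean of unit vectors via tangent-space averaging."""
    pts = [p / np.linalg.norm(p) for p in points]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # Initialize with the normalized weighted Euclidean average.
    mean = sum(wi * p for wi, p in zip(w, pts))
    mean = mean / np.linalg.norm(mean)
    for _ in range(iters):
        # Log map: project each point into the tangent space at `mean`.
        tangent = np.zeros_like(mean)
        for wi, p in zip(w, pts):
            cos_t = np.clip(np.dot(mean, p), -1.0, 1.0)
            theta = np.arccos(cos_t)
            if theta < 1e-12:
                continue  # point coincides with the current mean
            tangent += wi * (theta / np.sin(theta)) * (p - cos_t * mean)
        step = np.linalg.norm(tangent)
        if step < tol:
            break
        # Exp map: move along the geodesic in the averaged direction.
        mean = np.cos(step) * mean + np.sin(step) * (tangent / step)
    return mean

# Two orthogonal unit vectors with equal weights: the Karcher mean is
# the geodesic midpoint (1/sqrt(2), 1/sqrt(2)).
mid = karcher_mean([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [0.5, 0.5])
```

For points in flat space the Karcher mean reduces to the ordinary weighted average; the geodesic formulation is what lets a merge respect the geometry of the weight space rather than interpolating linearly.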

Key Capabilities

  • Uncensored Roleplay: Designed to handle uncensored conversations and roleplay scenarios.
  • Creative Writing: Excels in narrative writing, worldbuilding, and generating dialogues with distinct personalities.
  • Stylized Prose: Capable of producing stylized and elegant text, benefiting from the EtherealAurora component.
  • Balanced Expression: Offers a blend of clean, general-purpose talk, creative elegance, and a "dark punch" from its constituent models.

Good For

  • Narrative writing and storytelling.
  • Worldbuilding and creating rich fictional settings.
  • Generating dialogues with unique character voices.
  • Creative tasks requiring both beauty and intensity in prose.
  • Roleplaying scenarios where uncensored responses are desired.
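For the roleplay and narrative uses above, a request to the model might be assembled as follows, assuming it is served behind an OpenAI-compatible chat endpoint (a common hosting setup, but an assumption here). The `build_chat_request` helper and the sampling values are illustrative, not part of any official SDK or author recommendation.

```python
# Sketch of an OpenAI-style chat-completion payload for a roleplay turn.
# The model id comes from this card; temperature/max_tokens are
# illustrative starting points, not tuned recommendations.
import json

MODEL_ID = "RicardoEstep/AuroGodSlayerEtherealKrix-12B-Wg"

def build_chat_request(system_prompt, user_message,
                       temperature=0.9, max_tokens=512):
    """Assemble a chat-completion request body as a plain dict."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,  # higher values favor creative prose
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    "You are a grim, mythic narrator guiding a dark-fantasy campaign.",
    "Describe the ruined temple the party discovers at dusk.",
)
body = json.dumps(payload)  # ready to POST to a chat/completions endpoint
```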

Limitations

  • No Post-Merge Fine-tuning: The merged weights were not fine-tuned afterward, so occasional errors or drift may occur, especially on concrete factual requests.
  • Unsuitable for Factual/Coding Tasks: Its strengths lie in creative generation; it is not recommended where accuracy or working code is required.
  • Potential Dilution: The distinct signatures of Aurora and GodSlayer models might feel somewhat diluted in the merged output.