limloop/MN-12B-Hydra-RP-RU

TEXT GENERATION | Concurrency Cost: 1 | Model Size: 12B | Quant: FP8 | Ctx Length: 32k | Published: Mar 2, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights

limloop/MN-12B-Hydra-RP-RU is a 12-billion-parameter experimental TIES merge based on Mistral Nemo 12B, with a 32,768-token context length. Developed by limloop, it is optimized for advanced roleplay and deep literary Russian-language fluency, and it exhibits uncensored behavior. The model excels at maintaining character consistency and narrative depth in Russian, making it well suited to creative writing and interactive storytelling applications where explicit content may be desired.


MN-12B-Hydra-RP-RU: Roleplay and Russian Language Focused Merge

MN-12B-Hydra-RP-RU is a 12 billion parameter experimental merge model built upon the Mistral Nemo 12B architecture. Developed by limloop, this model leverages the TIES merging method to combine the strengths of several fine-tuned models, resulting in a unique blend of capabilities.
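
TIES merging itself is a simple per-tensor procedure: express each fine-tune as a task vector (delta from the base weights), trim each vector to its largest-magnitude entries, elect a per-parameter sign, then average only the values that agree with that sign. A minimal NumPy sketch of the idea (the function name and the `density`/`lam` hyperparameters are illustrative, not limloop's actual merge settings):

```python
import numpy as np

def ties_merge(base, finetuned, density=0.2, lam=1.0):
    """Sketch of TIES-merging for a single parameter tensor.

    base: base-model weights; finetuned: list of fine-tuned variants.
    `density` and `lam` are illustrative defaults, not the settings
    actually used for MN-12B-Hydra-RP-RU.
    """
    # 1. Task vectors: each fine-tune expressed as a delta from the base.
    deltas = [ft - base for ft in finetuned]
    # 2. Trim: zero out all but the top-`density` fraction of entries
    #    (by magnitude) in each task vector.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.size))
        thresh = np.sort(np.abs(d), axis=None)[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stack = np.stack(trimmed)
    # 3. Elect sign: per entry, the sign with the larger total magnitude
    #    (equivalently, the sign of the summed trimmed deltas).
    sign = np.sign(stack.sum(axis=0))
    # 4. Disjoint merge: average only the entries whose sign agrees
    #    with the elected sign, ignoring zeroed-out values.
    agree = (np.sign(stack) == sign) & (stack != 0)
    total = np.where(agree, stack, 0.0).sum(axis=0)
    count = np.maximum(agree.sum(axis=0), 1)
    return base + lam * total / count
```

In practice merges like this are usually produced with dedicated tooling (e.g. mergekit) over every tensor in the checkpoint rather than hand-rolled code, but the per-tensor math is as above.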

Key Capabilities

  • Advanced Roleplay: Exhibits strong character consistency and narrative depth, making it highly effective for interactive storytelling and roleplaying scenarios.
  • Deep Russian Language Fluency: Optimized for literary Russian, drawing inspiration from models tuned for Dostoevsky-style language.
  • Uncensored Behavior: Incorporates components designed to reduce safety filtering, allowing for the generation of explicit or controversial content.
  • Instruction Following: Demonstrates reliable adherence to user prompts and instructions.
  • Multilingual Support: Primarily focused on Russian, but also supports English.
  • Tool Calling: Retains the base Mistral Nemo capabilities for tool use.

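Since the merge inherits Mistral Nemo's instruct formatting, prompts for these capabilities are typically assembled in the `[INST] ... [/INST]` style. A hedged sketch of that assembly (the chat template bundled with the model's tokenizer, applied via `tokenizer.apply_chat_template`, is authoritative; system-prompt placement varies between Mistral releases):

```python
def build_mistral_prompt(system, turns):
    """Assemble a Mistral-instruct style prompt string.

    Sketch only: `system` is folded into the first user message, which
    is one common convention; check the model's own chat template.
    turns: list of (user, assistant) pairs, assistant=None for the
    turn awaiting a reply.
    """
    prompt = "<s>"
    for i, (user, assistant) in enumerate(turns):
        # Prepend the system prompt to the first user turn (assumption).
        content = f"{system}\n\n{user}" if (i == 0 and system) else user
        prompt += f"[INST]{content}[/INST]"
        if assistant is not None:
            prompt += f"{assistant}</s>"
    return prompt
```

For roleplay, the system prompt would carry the character card (in Russian or English), with alternating user/assistant turns forming the scene so far.
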
Good for

  • Creative Writing in Russian: Ideal for generating rich, detailed narratives and dialogues in Russian.
  • Roleplaying Applications: Suited for scenarios requiring consistent character portrayal and immersive storytelling.
  • Unfiltered Content Generation: Useful for applications where reduced censorship is a specific requirement, such as certain forms of creative expression or research.
  • Experimental LLM Development: Provides a strong base for further fine-tuning or research into merged model behaviors, particularly in the context of roleplay and specific language nuances.