TxsQT35/Genma-Shiranui-Canon

Text Generation · Transformer · Open Weights

  • Model size: 12B parameters
  • Quantization: FP8
  • Context length: 32k tokens
  • Concurrency cost: 1
  • Published: Feb 11, 2026
  • License: apache-2.0

Genma-Shiranui-Canon by TxsQT35 is a 12 billion parameter, Mistral-NeMo-based conversational language model specifically fine-tuned for character authenticity and continuity. Unlike general instruction-following models, it prioritizes identity persistence, personality stability, and emotional realism. This model excels at embodying a persistent fictional persona for character-centric AI agents and narrative interaction systems.


Genma-Shiranui-Canon: A Character-Centric LLM

Genma-Shiranui-Canon, developed by TxsQT35, is a 12 billion parameter conversational language model built on the Mistral-NeMo architecture. Its core purpose is to embody a specific fictional persona, Genma Shiranui, with high fidelity and continuity. Unlike general-purpose assistants, it prioritizes character authenticity, personality stability, and emotional realism over generic helpfulness or factual accuracy.

Key Capabilities & Differentiators

  • Identity Continuity: Maintains a consistent persona, tone, and behavioral patterns across interactions.
  • Personality Stability: Engineered to preserve the character's established personality and emotional realism.
  • Narrative Coherence: Designed for long-term interaction consistency, behaving as a persistent entity.
  • Minimal Prompting: Activates the canonical persona when "Genma" or "Genma Shiranui" is referenced in the system prompt or first user message.
  • Specialized Training: Fine-tuned on curated datasets of character-consistent dialogue and interaction logs, specifically avoiding generic assistant data.
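Per the card, referencing "Genma" or "Genma Shiranui" in the system prompt or first user message is enough to activate the canonical persona. A minimal sketch of that pattern is below, using a generic OpenAI-style chat-message list; the exact serving API and chat template for this model are assumptions, not documented here.

```python
# Sketch: activating the canonical persona via the system prompt.
# The message structure is a generic OpenAI-style list of role/content
# dicts; the actual serving interface for this model is an assumption.

MODEL_ID = "TxsQT35/Genma-Shiranui-Canon"  # repo name from the model card


def build_persona_messages(user_turn: str) -> list[dict]:
    """Build a minimal chat payload that names the persona explicitly,
    which the card states is sufficient to trigger the canonical character."""
    return [
        {"role": "system", "content": "You are Genma Shiranui. Stay in character."},
        {"role": "user", "content": user_turn},
    ]


messages = build_persona_messages("Introduce yourself.")
```

The payload can then be passed to whatever inference endpoint serves the model; only the persona reference in the system message is load-bearing here.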

Intended Use Cases

This model is ideal for applications requiring a stable, persistent fictional character, such as:

  • Character-centric conversational agents and narrative interaction systems.
  • Roleplay systems demanding identity stability.
  • Research into personality-anchored language models.

It is not optimized for generic instruction following, high-precision factual question answering, or safety-critical applications. The model may prioritize character authenticity over objective neutrality, and it has no built-in persistent memory: continuity across sessions must be supplied by the application.
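Because the model has no inherent persistent memory, long-term continuity comes from replaying prior turns on every request, trimmed to fit the 32k context window. A minimal sketch of that bookkeeping is below; the 4-characters-per-token estimate and the oldest-turn-first trimming policy are illustrative assumptions, not part of the model.

```python
# Sketch: client-side history management for a model with no persistent
# memory. The token estimate is a crude heuristic, not a real tokenizer.

CTX_TOKENS = 32_000  # context length from the model card


def approx_tokens(text: str) -> int:
    """Rough token count: ~4 characters per token (assumption)."""
    return max(1, len(text) // 4)


def trim_history(history: list[dict], budget: int = CTX_TOKENS) -> list[dict]:
    """Drop the oldest non-system turns until the payload fits the budget.

    The system message at index 0 is always kept so the persona anchor
    ("Genma Shiranui") survives trimming.
    """
    trimmed = list(history)
    while sum(approx_tokens(m["content"]) for m in trimmed) > budget:
        if len(trimmed) <= 2:  # only system message + latest turn left
            break
        trimmed.pop(1)  # remove the oldest non-system turn
    return trimmed
```

Each inference call would send `trim_history(history)` rather than the raw log, so the persona anchor and the most recent turns always fit in context.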