MuXodious/Gemma3NPC-1b-SOMPOA-heresy

Text generation · 1B parameters · BF16 · 32k context · Published: Apr 27, 2026 · License: gemma · Architecture: Transformer

MuXodious/Gemma3NPC-1b-SOMPOA-heresy is a 1-billion-parameter Gemma3NPC fine-tune by MuXodious, produced with P-E-W's Heretic engine using Self-Organizing Maps & Magnitude-Preserving Orthogonal Ablation (SOMPOA). Trained on the RolePlay-NPCv2 dataset, it aims to be a small agentic NPC model that pairs good roleplay quality with tool-calling capability. It shows markedly reduced refusal rates and improved character consistency, making it well suited to dynamic in-game interactions and roleplaying scenarios.


Model Overview

MuXodious/Gemma3NPC-1b-SOMPOA-heresy is a 1-billion-parameter Gemma3NPC model, fine-tuned by MuXodious using P-E-W's Heretic engine. This iteration incorporates Self-Organizing Maps & Magnitude-Preserving Orthogonal Ablation (SOMPOA) into that process. The model was developed at the request of redaihf and represents a fresh attempt at training Gemma3NPC models.
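A minimal sketch of using the checkpoint with Hugging Face `transformers`, assuming it loads like other Gemma 3 causal-LM checkpoints and follows the standard Gemma chat turn format (neither is confirmed by this card):

```python
MODEL_ID = "MuXodious/Gemma3NPC-1b-SOMPOA-heresy"


def build_npc_prompt(persona: str, user_message: str) -> str:
    """Format one turn in the Gemma chat template. The persona is folded
    into the user turn, since Gemma has no dedicated system role."""
    return (
        f"<start_of_turn>user\n{persona}\n\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate_reply(persona: str, user_message: str, max_new_tokens: int = 256) -> str:
    """Download the BF16 weights (heavy on first call) and generate a reply."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # deferred import

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(build_npc_prompt(persona, user_message), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

`build_npc_prompt` is a hypothetical helper for illustration; in practice the tokenizer's own `apply_chat_template` should be preferred if the repository ships a chat template.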

Key Capabilities & Training

  • Fine-tuned for Roleplay: Trained on the RolePlay-NPCv2 dataset, this model aims to enhance roleplaying quality and character consistency.
  • Abliteration Engine: Utilizes the Heretic v1.2.0 engine with SOMPOA, a technique designed to modify model behavior, specifically targeting refusal rates.
  • Reduced Refusals: Heretication cut refusals on the evaluation set from 378/416 (~91%) to 15/416 (~3.6%), indicating substantially improved compliance.
  • Emergent Reasoning: Observations suggest the model exhibits some signs of "reasoning" capabilities.
  • Character Consistency: The model is noted to be less likely to break out of character, which is crucial for immersive roleplaying applications.
  • Training Parameters: Trained as a rank-32 LoRA adapter over two epochs, using aggressive parameters including a learning rate of 2e-5 and a cosine learning rate scheduler with a 150-step warmup.
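The stated hyperparameters map onto a conventional LoRA fine-tuning recipe; a framework-agnostic sketch of the values above (field names are illustrative, not the author's actual training script, which is not published here):

```python
import math

# Hyperparameters stated on the model card, collected into a plain config dict.
TRAINING_CONFIG = {
    "adapter": "lora",
    "lora_rank": 32,           # rank-32 LoRA adapter
    "num_epochs": 2,           # two epochs over RolePlay-NPCv2
    "learning_rate": 2e-5,     # described as "aggressive" for a 1B model
    "lr_scheduler": "cosine",  # cosine decay ...
    "warmup_steps": 150,       # ... after a 150-step warmup
    "dtype": "bfloat16",       # matches the published BF16 weights
}


def warmup_then_cosine(step: int, total_steps: int, cfg=TRAINING_CONFIG) -> float:
    """Learning rate at a given step under linear warmup + cosine decay
    (the standard formulation; the exact schedule code is an assumption)."""
    peak, warmup = cfg["learning_rate"], cfg["warmup_steps"]
    if step < warmup:
        return peak * step / warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    return peak * 0.5 * (1 + math.cos(math.pi * progress))
```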

Performance & Benchmarks

  • PIQA Benchmark: The model scores 0.7291 accuracy and 0.7301 normalized accuracy on PIQA.
  • KL Divergence: Achieved a KL divergence of 0.0571, indicating a relatively small shift from the base model's distribution.
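For reference, the KL divergence reported by Heretic compares the abliterated model's next-token distribution against the base model's. A minimal sketch of the quantity itself (standard definition; how Heretic averages it across prompts is an assumption):

```python
import math


def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) in nats for two discrete probability distributions,
    with a small epsilon to guard against zero probabilities."""
    return sum(
        pi * math.log((pi + eps) / (qi + eps))
        for pi, qi in zip(p, q)
        if pi > 0
    )


# Toy next-token distributions: identical ones diverge by zero,
# a slightly shifted one by a small positive amount.
base    = [0.70, 0.20, 0.10]
ablated = [0.68, 0.22, 0.10]
```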

Intended Use Cases

This model targets small, agentic NPC use cases: strong roleplaying with the potential for tool-calling, making it well suited to dynamic in-game interactions and interactive narrative experiences. Users are encouraged to start with a roleplaying prompt to explore its capabilities.
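The card advertises tool-calling potential but documents no concrete format. One common pattern for small agentic models is a JSON tool call embedded in the reply; a hypothetical sketch of parsing such output (the `<tool_call>` wrapper and the `query_inventory` tool are illustrative assumptions, not documented features of this model):

```python
import json
import re

# Matches a JSON object wrapped in hypothetical <tool_call> tags.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)


def extract_tool_call(reply: str):
    """Return (name, arguments) if the reply embeds a JSON tool call,
    else None so the reply can be treated as plain roleplay text."""
    match = TOOL_CALL_RE.search(reply)
    if not match:
        return None
    call = json.loads(match.group(1))
    return call["name"], call.get("arguments", {})


reply = (
    "*The blacksmith checks her ledger.* "
    '<tool_call>{"name": "query_inventory", '
    '"arguments": {"item": "iron sword"}}</tool_call>'
)
```

A game loop would route parsed calls to its own handlers and feed the results back into the next turn.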