grayarea/Mars-27B-V.1-heretic-v1.2

Vision · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Ctx Length: 32k · Published: Mar 8, 2026 · Architecture: Transformer

grayarea/Mars-27B-V.1-heretic-v1.2 is a 27-billion-parameter language model with a 32,768-token context length, developed by grayarea. It is a decensored version of Mars-27B-V.1, processed with Heretic v1.2.0 to achieve zero refusals with minimal KL divergence from the original model. It is designed for applications requiring unrestricted content generation, and is notable for pushing decensoring further on a base model that was already significantly abliterated.


Mars-27B-V.1-heretic-v1.2 Overview

Mars-27B-V.1-heretic-v1.2 is a 27 billion parameter language model, a specialized variant of the Mars-27B-V.1 base model. Developed by grayarea, this version has been processed with Heretic v1.2.0, focusing on the removal of refusal behaviors while maintaining a low KL divergence of 0.0159 from the original model. The base Mars-27B-V.1 itself is noted as a merge of already significantly 'abliterated' models, making this 'heretic' version an experiment in further reducing content restrictions.
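
Below is a minimal generation sketch, assuming the weights are published on Hugging Face under the same ID and load through the standard transformers causal-LM interface. The repo ID, dtype, and generation settings are illustrative assumptions, not confirmed details of the release:

```python
# Minimal generation sketch; assumes the model is hosted on Hugging Face
# under "grayarea/Mars-27B-V.1-heretic-v1.2" (unverified) and fits in GPU
# memory at the chosen precision.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grayarea/Mars-27B-V.1-heretic-v1.2"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the listing quotes FP8; bf16 is a safe fallback
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the plot of Frankenstein."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```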

Key Capabilities

  • Zero Refusals: Engineered to produce responses without content refusals, even for prompts that might typically trigger them.
  • Low KL Divergence: Achieves zero refusals with a minimal KL divergence of 0.0159, indicating that its output distribution remains close to the original model's.
  • Abliteration Parameters: Uses a custom Heretic training dataset, Magnitude-Preserving Orthogonal Ablation (MPOA), full row renormalization, and a winsorization quantile of 0.997 for its decensoring process (a simplified sketch follows this list).
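
To make the abliteration parameters above concrete, here is a simplified, hypothetical sketch of what magnitude-preserving orthogonal ablation with a winsorized direction estimate and full row renormalization could look like on a single weight matrix. Every function and variable name here is illustrative; none of this is Heretic's actual API:

```python
# Hypothetical sketch of Magnitude-Preserving Orthogonal Ablation (MPOA);
# names and details are illustrative, not Heretic's implementation.
import torch

def winsorize(x: torch.Tensor, q: float = 0.997) -> torch.Tensor:
    """Clamp extreme activation values at the q-quantile of their magnitude,
    limiting outlier influence on the direction estimate."""
    hi = torch.quantile(x.abs(), q)
    return x.clamp(min=-hi, max=hi)

def estimate_refusal_direction(harmful_acts: torch.Tensor,
                               harmless_acts: torch.Tensor,
                               q: float = 0.997) -> torch.Tensor:
    """Difference-of-means direction between winsorized activation sets."""
    diff = winsorize(harmful_acts, q).mean(0) - winsorize(harmless_acts, q).mean(0)
    return diff / diff.norm()

def mpoa(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project the refusal direction out of each weight row, then rescale
    every row back to its original L2 norm (full row renormalization), so
    the edit changes row direction but preserves row magnitude."""
    old_norms = weight.norm(dim=1, keepdim=True)
    ablated = weight - (weight @ direction).unsqueeze(1) * direction
    new_norms = ablated.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return ablated * (old_norms / new_norms)
```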

Good For

  • Unrestricted Content Generation: Ideal for use cases where models are expected to generate responses without censorship or refusal behaviors.
  • Research into Model Decensoring: Useful for researchers studying the effects and methods of removing safety alignments or refusal mechanisms from LLMs (see the KL-measurement sketch after this list).
  • Creative and Roleplay Applications: Suitable for scenarios requiring highly unconstrained text generation, such as creative writing, storytelling, or role-playing where typical safety filters might hinder output.
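
For the research use case above, closeness to the base model can be checked empirically. The sketch below assumes both the base and decensored checkpoints are available and compares next-token distributions over a small prompt set; the repo IDs and prompts are placeholders, and the published 0.0159 figure comes from Heretic's own evaluation, which this snippet does not reproduce exactly:

```python
# Illustrative mean per-token KL(base || decensored) over a prompt set;
# repo IDs and prompts are placeholders, not part of the release.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "grayarea/Mars-27B-V.1"                  # assumed base repo ID
heretic_id = "grayarea/Mars-27B-V.1-heretic-v1.2"  # assumed repo ID

tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto")
tuned = AutoModelForCausalLM.from_pretrained(
    heretic_id, torch_dtype=torch.bfloat16, device_map="auto")

@torch.no_grad()
def mean_kl(prompts):
    total = 0.0
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        log_p = F.log_softmax(base(ids.to(base.device)).logits.float(), dim=-1)
        log_q = F.log_softmax(tuned(ids.to(tuned.device)).logits.float(), dim=-1)
        # KL(P || Q) summed over the vocabulary, averaged over positions
        kl = (log_p.exp() * (log_p - log_q.to(log_p.device))).sum(-1)
        total += kl.mean().item()
    return total / len(prompts)

print(mean_kl(["Write a haiku about rain.", "Explain photosynthesis."]))
```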