mlabonne/gemma-3-27b-it-qat-abliterated

TEXT GENERATION · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Ctx Length: 32k · Published: May 28, 2025 · License: gemma · Architecture: Transformer

The mlabonne/gemma-3-27b-it-qat-abliterated model is a 27-billion-parameter instruction-tuned Gemma 3 variant, developed by mlabonne, that has been uncensored using a novel 'abliteration' technique. The technique specifically targets and reduces the refusal behaviors found in the original google/gemma-3-27b-it-qat-q4_0-unquantized model. The result generates coherent outputs with a high acceptance rate, making it suitable for applications that require less restrictive content generation.


What is Gemma 3 27B IT QAT Abliterated?

This model is an uncensored version of Google's Gemma 3 27B instruction-tuned model, specifically google/gemma-3-27b-it-qat-q4_0-unquantized. Developed by mlabonne, it utilizes a unique abliteration technique to significantly reduce refusal behaviors present in the base model.

Key Capabilities

  • Refusal Mitigation: Employs a novel abliteration method that computes and subtracts a 'refusal direction' from the model's hidden states, particularly targeting modules like o_proj.
  • Enhanced Acceptance Rate: Achieves an acceptance rate exceeding 90% on a dedicated test set, evaluated using both a dictionary approach and NousResearch/Minos-v1.
  • Coherent Output Generation: Designed to produce coherent and relevant responses while minimizing content restrictions.
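The refusal-mitigation bullet above can be illustrated with a minimal sketch of the core idea behind abliteration: estimate a 'refusal direction' as the difference between mean hidden-state activations on refusal-triggering versus benign prompts, then orthogonalize a weight matrix (such as o_proj) against that direction. This is a simplified NumPy illustration of the general approach, not mlabonne's actual implementation; the function names and shapes are assumptions for the example.

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    # Unit vector along the mean activation difference between the two
    # prompt sets -- the estimated 'refusal direction' in hidden-state space.
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate_weights(W, d):
    # Orthogonalize a weight matrix (e.g. o_proj) against the refusal
    # direction: W' = W - d d^T W removes the component of every output
    # of W that lies along d, so the module can no longer write to it.
    return W - np.outer(d, d) @ W
```

After ablation, any output `ablate_weights(W, d) @ x` has zero projection onto `d`, which is the mechanism by which refusal behavior is suppressed while other directions of the hidden state are left untouched.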
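The dictionary approach mentioned in the acceptance-rate bullet can be sketched as a simple substring check against known refusal phrases. The marker list below is hypothetical (the model card does not publish the actual dictionary), and this is only an illustration of how such an evaluation might be scored.

```python
# Hypothetical refusal-phrase dictionary; the real evaluation's list is not published.
REFUSAL_MARKERS = ["i cannot", "i can't", "i'm sorry", "as an ai"]

def is_refusal(response: str) -> bool:
    # A response counts as a refusal if it contains any known refusal phrase.
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def acceptance_rate(responses) -> float:
    # Fraction of responses that are NOT flagged as refusals.
    accepted = sum(not is_refusal(r) for r in responses)
    return accepted / len(responses)
```

A classifier such as NousResearch/Minos-v1 plays the same role as `is_refusal` here, but judges refusals with a trained model rather than a fixed phrase list.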

Good For

  • Applications requiring a less restrictive language model.
  • Use cases where the base Gemma 3 model's refusal behaviors are undesirable.
  • Exploration of advanced uncensoring techniques in large language models.