mlabonne/gemma-3-27b-it-abliterated
Hugging Face
Vision · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Context Length: 32k
Published: Mar 16, 2025 · License: gemma · Architecture: Transformer

The mlabonne/gemma-3-27b-it-abliterated model is a 27 billion parameter instruction-tuned causal language model based on Google's Gemma 3 architecture. This model has been specifically modified using an "abliteration" technique to reduce refusals and censorship, aiming for a higher acceptance rate in responses. It is designed for use cases requiring less restrictive content generation while preserving coherence. The model has a context length of 32768 tokens.


Overview

mlabonne/gemma-3-27b-it-abliterated is an uncensored variant of the google/gemma-3-27b-it model, developed by mlabonne. It leverages a novel "abliteration" technique to significantly reduce model refusals and censorship, aiming to provide more open-ended and less restricted responses. This experimental modification focuses on maintaining output coherence while increasing the acceptance rate of various prompts.

Key Capabilities

  • Reduced Censorship: Engineered to be less prone to refusals compared to its base model, achieving a reported acceptance rate exceeding 90%.
  • Coherent Output: Despite the abliteration, the model is designed to produce coherent and understandable text.
  • Experimental Abliteration: Utilizes a layerwise abliteration method, computing a refusal direction based on hidden states for each layer independently, combined with a refusal weight of 1.5.
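The layerwise method described above can be sketched in plain Python. This is a hypothetical, simplified illustration, not the author's actual implementation: it assumes the refusal direction for a layer is the normalized difference of mean hidden states between refused and accepted prompts, and that ablation subtracts that direction's component from a hidden state, scaled by the refusal weight of 1.5.

```python
from math import sqrt

REFUSAL_WEIGHT = 1.5  # refusal weight reported for this model

def mean_vec(rows):
    """Elementwise mean of a list of hidden-state vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def refusal_direction(harmful, harmless):
    """Normalized difference-of-means between hidden states collected on
    prompts the model refuses vs. prompts it accepts (one direction per layer)."""
    diff = [a - b for a, b in zip(mean_vec(harmful), mean_vec(harmless))]
    norm = sqrt(sum(x * x for x in diff))
    return [x / norm for x in diff]

def ablate(hidden, direction, weight=REFUSAL_WEIGHT):
    """Subtract the component of a hidden state along the refusal direction,
    scaled by the refusal weight (weight > 1 overshoots past zero)."""
    proj = sum(h * d for h, d in zip(hidden, direction))
    return [h - weight * proj * d for h, d in zip(hidden, direction)]
```

With a weight of 1.0 the refusal component is removed exactly; the 1.5 used here pushes the activation past zero along that direction, which is why the technique is described as experimental.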

Good For

  • Unrestricted Content Generation: Ideal for applications where a higher tolerance for diverse and potentially sensitive content is required.
  • Exploratory AI Research: Suitable for researchers and developers experimenting with model safety, bias, and censorship mitigation techniques.
  • Creative and Roleplay Scenarios: Can be used in contexts that benefit from fewer content restrictions, such as creative writing or role-playing applications.
Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each configuration covers the following samplers:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
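To make the filtering samplers in that list concrete, here is a minimal pure-Python sketch of how temperature, top_k, top_p, and min_p successively shape a next-token distribution. The values and the `filter_logits` helper are illustrative assumptions, not the actual Featherless user configurations (which are not shown in this card), and the penalty parameters are omitted for brevity.

```python
import math

def filter_logits(logits, temperature=1.0, top_k=0, top_p=1.0, min_p=0.0):
    """Apply temperature scaling, then top-k, top-p (nucleus), and min-p
    filtering. `logits` maps token -> raw score; returns token -> probability."""
    # temperature: rescale logits before softmax (lower = sharper distribution)
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    exp = {t: math.exp(l - m) for t, l in scaled.items()}
    z = sum(exp.values())
    probs = {t: e / z for t, e in exp.items()}

    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    # top-k: keep only the k most likely tokens (0 disables the filter)
    if top_k > 0:
        ranked = ranked[:top_k]
    # top-p: keep the smallest prefix whose cumulative mass reaches top_p
    kept, cum = [], 0.0
    for t, p in ranked:
        kept.append((t, p))
        cum += p
        if cum >= top_p:
            break
    # min-p: drop tokens below min_p times the top token's probability
    floor = min_p * kept[0][1]
    kept = [(t, p) for t, p in kept if p >= floor]
    # renormalize the surviving tokens
    z = sum(p for _, p in kept)
    return {t: p / z for t, p in kept}
```

The frequency, presence, and repetition penalties would be applied to the raw logits before this step, down-weighting tokens that already appear in the context.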