mlabonne/gemma-3-1b-it-abliterated
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Mar 20, 2025 · License: gemma · Architecture: Transformer

The mlabonne/gemma-3-1b-it-abliterated model is an uncensored version of Google's Gemma-3-1B-IT, created using a novel "abliteration" technique. This 1.1 billion parameter instruction-tuned model is specifically modified to reduce refusal behaviors while largely preserving its original capabilities. It is designed for applications requiring more permissive content generation, offering a high acceptance rate for diverse prompts.


Gemma 3 1B IT Abliterated: Uncensored Instruction-Tuned Model

This model is an uncensored variant of the google/gemma-3-1b-it instruction-tuned large language model, developed by mlabonne. It leverages a new "abliteration" technique to significantly reduce the refusal behaviors typical of safety-aligned instruction-tuned models.

Key Capabilities & Features

  • Uncensored Output: Modified to provide a high acceptance rate (>90%) for prompts that might otherwise be refused by standard instruction-tuned models.
  • Abliteration Technique: Utilizes a layerwise abliteration method, computing a refusal direction based on hidden states across most layers (3 to 45) and applying a refusal weight of 0.75.
  • Coherent Generation: Despite the experimental nature of the technique, the model aims to produce coherent outputs, though occasional minor text garbling (e.g., "It' my" instead of "It's my") may occur.
  • Resilience: Gemma 3 architecture demonstrated higher resilience to this abliteration process compared to other models like Qwen 2.5.
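The layerwise procedure above can be sketched in a few lines. This is a minimal toy illustration, not the author's actual script: it assumes the common formulation of abliteration, where a "refusal direction" is the normalized mean difference between hidden states on refused vs. accepted prompts, and weight matrices writing to the residual stream are partially orthogonalized against it. The tensors and shapes here are random stand-ins.

```python
import torch


def refusal_direction(h_harmful: torch.Tensor, h_harmless: torch.Tensor) -> torch.Tensor:
    """Unit-normalized mean-difference of hidden states between the two prompt sets."""
    d = h_harmful.mean(dim=0) - h_harmless.mean(dim=0)
    return d / d.norm()


def abliterate(W: torch.Tensor, r: torch.Tensor, weight: float = 0.75) -> torch.Tensor:
    """Damp the component of W's output along r by `weight` (0.75 per the model card).

    W: (d_model, d_in) matrix writing into the residual stream; r: (d_model,) unit vector.
    """
    return W - weight * torch.outer(r, r) @ W


torch.manual_seed(0)
d_model, d_in = 16, 8

# Toy activations: harmful-prompt states carry an extra component (the "refusal" signal).
bias = torch.zeros(d_model)
bias[0] = 3.0
h_harmful = torch.randn(64, d_model) + bias
h_harmless = torch.randn(64, d_model)

r = refusal_direction(h_harmful, h_harmless)
W = torch.randn(d_model, d_in)
W_abl = abliterate(W, r, weight=0.75)

x = torch.randn(d_in)
before = torch.dot(r, W @ x)
after = torch.dot(r, W_abl @ x)
# Output energy along the refusal direction shrinks by a factor of (1 - weight) = 0.25.
```

In the real pipeline this projection is applied to the attention-output and MLP-down projections of each targeted layer, rather than to a single random matrix.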

Recommended Usage

  • Generation Parameters: For optimal results, it is recommended to use temperature=1.0, top_k=64, and top_p=0.95.
  • Quantization: GGUF quantizations are available for efficient deployment and inference.
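To show what the recommended `temperature=1.0`, `top_k=64`, and `top_p=0.95` settings actually do at decode time, here is a self-contained sketch of the standard temperature / top-k / top-p (nucleus) sampling chain. It is illustrative only; in practice you would pass these values to your inference framework's generation config rather than sample by hand.

```python
import torch


def sample_next_token(logits: torch.Tensor,
                      temperature: float = 1.0,
                      top_k: int = 64,
                      top_p: float = 0.95) -> int:
    """Sample one token id from 1-D logits with temperature, top-k, then top-p filtering."""
    logits = logits / temperature  # temperature scaling (1.0 leaves logits unchanged)

    # Top-k: keep only the k highest-scoring tokens.
    k = min(top_k, logits.size(-1))
    kth_value = torch.topk(logits, k).values[-1]
    logits = logits.masked_fill(logits < kth_value, float("-inf"))

    # Top-p (nucleus): keep the smallest prefix of tokens whose cumulative
    # probability reaches p; mask everything past that prefix.
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    probs = torch.softmax(sorted_logits, dim=-1)
    cumulative = torch.cumsum(probs, dim=-1)
    outside_nucleus = cumulative - probs > top_p
    sorted_logits = sorted_logits.masked_fill(outside_nucleus, float("-inf"))
    logits = torch.full_like(logits, float("-inf")).scatter(0, sorted_idx, sorted_logits)

    return torch.multinomial(torch.softmax(logits, dim=-1), num_samples=1).item()


# With one dominant logit, the nucleus collapses to a single token.
logits = torch.tensor([10.0, 1.0, 0.0, -5.0])
token = sample_next_token(logits, temperature=1.0, top_k=2, top_p=0.95)
```

For the GGUF quantizations, the same three parameters are exposed as `--temp`, `--top-k`, and `--top-p` style options in common llama.cpp-based runners.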