Gemma 3 1B IT Abliterated: Uncensored Instruction-Tuned Model
Overview
This model is an uncensored variant of the google/gemma-3-1b-it instruction-tuned language model, created by mlabonne. It uses the "abliteration" technique to substantially reduce the refusal behaviors built into the original instruction-tuned model.
Key Capabilities & Features
- Uncensored Output: Modified to provide a high acceptance rate (>90%) for prompts that might otherwise be refused by standard instruction-tuned models.
- Abliteration Technique: Utilizes a layerwise abliteration method, computing a refusal direction based on hidden states across most layers (3 to 45) and applying a refusal weight of 0.75.
- Coherent Generation: Despite the experimental nature of the technique, outputs remain largely coherent, though minor text garbling (e.g., "It' my" instead of "It's my") may occasionally occur.
- Resilience: The Gemma 3 architecture proved more resilient to the abliteration process than other models such as Qwen 2.5.
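The layerwise abliteration idea above can be sketched in a few lines: estimate a "refusal direction" from the difference in mean hidden states between refusal-inducing and benign prompts, then subtract a weighted projection onto that direction. This is a minimal illustration on synthetic data with hypothetical function names, not mlabonne's actual implementation:

```python
import numpy as np

def refusal_direction(harmful_hidden, harmless_hidden):
    # Mean difference of hidden states between refusal-inducing and
    # benign prompts, normalized to a unit vector.
    diff = harmful_hidden.mean(axis=0) - harmless_hidden.mean(axis=0)
    return diff / np.linalg.norm(diff)

def ablate(hidden, direction, weight=0.75):
    # Subtract `weight` times each state's projection onto the
    # refusal direction (weight=0.75 mirrors the card's refusal weight).
    proj = hidden @ direction
    return hidden - weight * np.outer(proj, direction)
```

With weight=0.75, the component of each hidden state along the refusal direction is reduced to 25% of its original magnitude; in the real model this transformation would be applied to the hidden states of most layers.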
Recommended Usage
- Generation Parameters: For optimal results, use temperature=1.0, top_k=64, and top_p=0.95.
- Quantization: GGUF quantizations are available for efficient deployment and inference.
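To illustrate what these sampling parameters control, here is a minimal sketch of temperature, top-k, and top-p (nucleus) filtering applied to a raw logit vector. It is a hypothetical helper for intuition only, not the model's or any library's API:

```python
import numpy as np

def filter_logits(logits, temperature=1.0, top_k=64, top_p=0.95):
    # Temperature-scaled softmax over the vocabulary.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Keep only the top_k most probable tokens...
    order = np.argsort(probs)[::-1]
    keep = order[:top_k]
    # ...then the smallest prefix whose cumulative mass reaches top_p.
    cutoff = np.searchsorted(np.cumsum(probs[keep]), top_p) + 1
    keep = keep[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()  # renormalized sampling distribution
```

Sampling then draws from the renormalized distribution, so at most 64 tokens (and only those inside the 0.95 nucleus) are ever candidates.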