Model Overview
huihui-ai/Llama-3.2-1B-Instruct-abliterated is a 1-billion-parameter instruction-tuned model based on Llama 3.2, with a 32,768-token context window. Its defining characteristic is that it is uncensored: an "abliteration" technique is applied with the aim of removing the censorship constraints present in the original Llama 3.2 1B Instruct model.
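Abliteration is commonly described as directional ablation: a "refusal direction" is estimated from the difference in mean activations between prompts the base model refuses and prompts it answers, and the model's weight matrices are then orthogonalized against that direction so no layer can write along it. The sketch below illustrates only the projection step on random tensors; it is not the exact procedure used to produce this model, and the names (`orthogonalize`, `refusal_direction`) are hypothetical.

```python
import torch

def orthogonalize(weight: torch.Tensor, refusal_direction: torch.Tensor) -> torch.Tensor:
    """Project the refusal direction out of a weight matrix's output space.

    weight:            (d_out, d_in), e.g. an attention or MLP output projection
    refusal_direction: (d_out,), estimated from mean activation differences
    """
    r = refusal_direction / refusal_direction.norm()
    # W' = (I - r r^T) W, so W' @ x has no component along r for any input x.
    return weight - torch.outer(r, r @ weight)

# Shape-only illustration with random data (not real model weights).
W = torch.randn(2048, 2048)
r = torch.randn(2048)
W_ablated = orthogonalize(W, r)
print((r / r.norm() @ W_ablated).abs().max())  # ~0, up to float32 rounding
```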
Key Characteristics
- Uncensored Responses: Designed to provide unfiltered outputs, making it suitable for applications requiring less restrictive content generation.
- Abliteration Technique: Utilizes a specific method (detailed in the linked Hugging Face article) to modify the base model's behavior regarding content moderation.
- Ollama Support: Easily deployable via Ollama with a pre-built image, simplifying local execution (see the sketch after this list).
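To illustrate the Ollama route, the sketch below calls a locally running Ollama server through its REST API (default port 11434) after the image has been pulled. The model tag used here is an assumption for illustration; check the model card or the Ollama library for the exact name.

```python
import requests

# Assumed tag; replace with the exact tag listed for this model on Ollama.
MODEL = "huihui_ai/llama3.2-abliterate:1b"

def generate(prompt: str) -> str:
    """Send a single non-streaming generation request to a local Ollama server."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(generate("Explain in one sentence what an abliterated model is."))
```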
Performance Considerations
Evaluations show that while the model achieves its goal of uncensored output, there is a slight performance trade-off on several standard benchmarks compared to the original Llama-3.2-1B-Instruct:
- IF_Eval: 56.88 (vs. 58.50 for base)
- MMLU Pro: 14.35 (vs. 16.35 for base)
- TruthfulQA: 38.96 (vs. 43.08 for base)
- BBH: 31.83 (vs. 33.75 for base)
- GPQA: 26.39 (vs. 25.96 for base), the one listed benchmark where the abliterated model slightly outperforms the base.
Use Cases
This model is best suited for developers and researchers who need an instruction-tuned LLM focused on uncensored content generation and for whom the slight drop in general benchmark performance is an acceptable trade-off for unrestricted output. It is particularly useful for exploring the behavior of language models without built-in content filters.
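For direct experimentation, a minimal sketch of loading and prompting the model with the Hugging Face transformers library (the dtype and generation settings are illustrative, not taken from the model card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Llama-3.2-1B-Instruct-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what abliteration changes in a model."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```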