Overview
mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated is an 8-billion-parameter instruction-tuned model derived from Meta's Llama 3.1 architecture. Its primary distinction is being uncensored: a technique called abliteration is applied to suppress the model's built-in refusal behavior, so it generates responses with fewer content restrictions than the base model.
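As a rough illustration of what abliteration does, the sketch below shows the usual refusal-direction recipe (difference of means over activations, then weight orthogonalization) on toy tensors. This is a conceptual sketch only, not the code used to produce this model; the tensor shapes and data are placeholders.

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    # Difference of mean residual-stream activations, normalized to unit length.
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # Project the refusal direction out of a weight matrix that writes to the
    # residual stream, so its outputs have no component along that direction.
    return weight - torch.outer(direction, direction) @ weight

# Toy demonstration with random placeholder tensors (d_model kept tiny on purpose).
d_model = 16
harmful = torch.randn(32, d_model)   # activations on refusal-inducing prompts (placeholder)
harmless = torch.randn(32, d_model)  # activations on benign prompts (placeholder)
r = refusal_direction(harmful, harmless)

W = torch.randn(d_model, d_model)    # stand-in for e.g. an attention output projection
W_ablit = orthogonalize(W, r)
print(torch.allclose(r @ W_ablit, torch.zeros(d_model), atol=1e-5))  # True: r is removed
```

In the typical recipe, this projection is applied to each matrix that writes into the residual stream, which is what removes the model's tendency to refuse.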
Key Characteristics
- Uncensored Output: Designed to provide less restricted content generation compared to its base model.
- Abliteration Technique: Created by identifying and ablating the model's refusal direction, as detailed in this article.
- Llama 3.1 Base: Leverages the foundational capabilities of the Meta Llama 3.1 8B Instruct model.
- Context Length: Supports a substantial context window of 32,768 tokens.
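The model can be run like any other Llama 3.1 checkpoint. The snippet below is a minimal usage sketch with the Hugging Face transformers library; the dtype and device settings are assumptions and may need adjusting for your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed; use float16 or 4-bit loading if memory is tight
    device_map="auto",
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Explain abliteration in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```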
Performance Insights
Evaluations on the Open LLM Leaderboard show an average score of 23.13. Specific metrics include:
- IFEval (0-shot): 73.29
- BBH (3-shot): 27.13
- MMLU-PRO (5-shot): 27.81
Quantizations Available
Various quantized versions are provided for optimized deployment:
- New GGUF: mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
- ZeroWw GGUF: ZeroWw/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
- EXL2: Apel-sin/llama-3.1-8B-abliterated-exl2
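For the GGUF builds listed above, a llama-cpp-python sketch along these lines should work; the filename pattern and quantization level (Q4_K_M) are assumptions, so check the repository's file listing for the exact variant you want.

```python
from llama_cpp import Llama

# Download and load one GGUF file from the quantized repository.
llm = Llama.from_pretrained(
    repo_id="mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; assumed quant level, adjust as needed
    n_ctx=8192,               # context window for this session; raise for longer prompts
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what abliteration does."}]
)
print(response["choices"][0]["message"]["content"])
```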
Use Cases
This model is particularly suited for applications where the default content moderation of standard instruction-tuned models is too restrictive, enabling more open-ended and less filtered text generation.