mlabonne/Daredevil-8B-abliterated
mlabonne/Daredevil-8B-abliterated is an 8 billion parameter uncensored language model based on mlabonne/Daredevil-8B, developed using failspy's notebook. It applies abliteration, a technique that removes the direction mediating refusal in LLMs, making it suitable for applications that do not require alignment. The model is optimized for role-playing and similar use cases, and shows strong performance on the Open LLM Leaderboard for its size class.
Overview
mlabonne/Daredevil-8B-abliterated is an 8 billion parameter language model derived from mlabonne/Daredevil-8B. It was created with the technique described in the blog post "Refusal in LLMs is mediated by a single direction," which removes the model's refusal behavior. The result is an uncensored model suited to a broader range of applications where alignment is not a primary concern.
Key Characteristics
- Uncensored Nature: Designed for applications that do not require strict alignment, such as role-playing scenarios.
- Performance: On May 27, 2024, it was noted as the second best-performing 8B model on the Open LLM Leaderboard based on MMLU score.
- Evaluation: While uncensored, its performance on general benchmarks like AGIEval, GPT4All, TruthfulQA, and Bigbench remains competitive, closely trailing its base model, Daredevil-8B.
Use Cases
- Role-playing: Its uncensored nature makes it particularly well-suited for interactive and creative role-playing applications.
- Unrestricted Content Generation: Suited to scenarios where the model must respond without built-in refusal mechanisms, provided the user follows applicable ethical guidelines.
Usage
The model can be integrated into Python projects with the Hugging Face transformers library, using the standard text-generation workflow.
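The card does not include a full snippet, so the following is a minimal sketch using the standard `transformers` text-generation pipeline. The generation parameters (`max_new_tokens`, `temperature`) and the bfloat16 dtype are illustrative assumptions, not values recommended by the model author; an 8B model needs a GPU with roughly 16 GB of memory at bf16.

```python
MODEL_ID = "mlabonne/Daredevil-8B-abliterated"


def build_messages(user_prompt: str) -> list:
    """Wrap a user prompt in the chat-message format consumed by the
    tokenizer's chat template."""
    return [{"role": "user", "content": user_prompt}]


def main() -> None:
    # Heavy imports kept inside main() so the module can be imported
    # without torch/transformers or a GPU present.
    import torch
    from transformers import AutoTokenizer, pipeline

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        tokenizer=tokenizer,
        torch_dtype=torch.bfloat16,  # assumption: bf16 to reduce memory use
        device_map="auto",
    )

    # Apply the model's chat template to format the prompt correctly.
    prompt = tokenizer.apply_chat_template(
        build_messages("Write a short opening scene for a fantasy role-play."),
        tokenize=False,
        add_generation_prompt=True,
    )

    # Sampling settings below are illustrative, not tuned values.
    outputs = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
    print(outputs[0]["generated_text"])


if __name__ == "__main__":
    main()
```

The `build_messages` helper is a hypothetical convenience wrapper; any list of role/content dicts accepted by `apply_chat_template` works equally well.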