Oxte-Pech1/Daredevil-8B-abliterated

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Apr 11, 2026 · License: other · Architecture: Transformer

Oxte-Pech1/Daredevil-8B-abliterated is an 8-billion-parameter language model derived from mlabonne/Daredevil-8B and modified to be uncensored. It uses abliteration, a technique that ablates the activation direction mediating refusal behavior in LLMs, making it suitable for applications such as role-playing that do not require strict alignment. With an 8192-token context length, it offers flexibility for a range of generative tasks. The model is noted for its performance on the Open LLM Leaderboard, ranking as the second best-performing 8B model by MMLU score as of May 2024.


Oxte-Pech1/Daredevil-8B-abliterated Overview

This model is an abliterated version of mlabonne/Daredevil-8B, created with the technique described in the blog post "Refusal in LLMs is mediated by a single direction": the single activation direction that mediates refusal is identified and ablated. The result is an uncensored model that largely no longer exhibits the refusal behaviors typical of aligned LLMs.
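The core idea from the referenced blog post can be sketched in a few lines: estimate the refusal direction as the difference of mean activations on refusal-inducing versus harmless prompts, then project that direction out of the activations. The sketch below uses random toy data in place of real hidden states; the shapes, data, and the `ablate` helper are illustrative assumptions, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

# Toy residual-stream activations. In practice these come from running the
# model on harmful vs. harmless prompts and caching hidden states at a layer;
# here the "harmful" set is artificially shifted along one axis.
harmful_acts = rng.normal(size=(100, d_model)) + 3.0 * np.eye(d_model)[0]
harmless_acts = rng.normal(size=(100, d_model))

# Estimate the refusal direction as the difference of mean activations.
refusal_dir = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)

def ablate(acts, direction):
    """Remove each activation's component along the (unit) direction."""
    return acts - np.outer(acts @ direction, direction)

ablated = ablate(harmful_acts, refusal_dir)
# After ablation, activations have (numerically) zero projection
# onto the refusal direction.
print(np.abs(ablated @ refusal_dir).max())
```

In the real procedure this projection is applied at inference time (or baked into the weights, which is what an "abliterated" checkpoint ships with), so the model can still answer but can no longer represent the refusal signal along that direction.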

Key Capabilities & Features

  • Uncensored Output: Designed to generate responses without typical refusal mechanisms, enabling broader content generation.
  • 8 Billion Parameters: A compact yet capable model size for various applications.
  • 8192 Token Context Length: Supports longer inputs and more extensive conversational turns.
  • Strong Performance: Ranked as the second best-performing 8B model on the Open LLM Leaderboard by MMLU score (as of May 2024).
  • Quantization Available: GGUF quantizations are provided for efficient deployment.

Good For

  • Role-playing: Ideal for scenarios requiring creative and unrestricted dialogue.
  • Applications without Alignment Needs: Suitable for use cases where strict ethical alignment or refusal of certain prompts is not desired or is counterproductive.
  • Exploration of Uncensored LLM Behavior: Useful for researchers and developers studying the effects of refusal mediation in language models.
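For readers studying refusal mediation, the step that turns runtime ablation into a modified checkpoint like this one is weight orthogonalization: any matrix that writes into the residual stream is projected so it can never output a component along the refusal direction. The snippet below is a minimal sketch with a random toy matrix and a hypothetical refusal direction `r`, not the actual conversion script.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_in = 64, 32

# Hypothetical unit refusal direction, found as in the blog post.
r = rng.normal(size=d_model)
r /= np.linalg.norm(r)

# Toy weight matrix whose outputs are written into the residual stream.
W = rng.normal(size=(d_model, d_in))

# Orthogonalize W against r: W' = (I - r r^T) W, so for any input x,
# the output W' x has zero component along r.
W_abl = W - np.outer(r, r @ W)

x = rng.normal(size=d_in)
print(abs(r @ (W_abl @ x)))
```

Because the projection is baked into the weights, the resulting checkpoint behaves as if the ablation hook were always active, and it quantizes (e.g. to GGUF) like any other model.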