0arch-io/dolphin-v2-8b-abliterated

Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Feb 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Dolphin V2 8B Abliterated is an 8.2-billion-parameter uncensored language model built on Qwen3-8B and fine-tuned on 1.35 million high-quality instruction samples. Developed by 0arch-io for TPU Research Cloud (TRC) research, it supports a maximum context length of 40,960 tokens and has undergone an "abliteration" process to remove refusal behaviors. The model is designed to comply with any request without refusing, making it suitable for research into uncensored model capabilities.


Dolphin V2 8B Abliterated: Uncensored Qwen3-8B Fine-tune

Dolphin V2 8B Abliterated is an 8.2-billion-parameter language model based on the Qwen3-8B architecture, developed by 0arch-io. It has been fine-tuned on 1.35 million high-quality instruction samples and additionally processed with an abliteration technique to remove refusal behaviors, so that it complies with user requests without censorship.
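Since the model is built on Qwen3-8B, it presumably inherits the ChatML-style prompt format used by the Qwen family. A minimal sketch of that format, built by hand for clarity (in practice the Hugging Face tokenizer's `apply_chat_template` renders this for you; the system-prompt wording here is illustrative, not from the model card):

```python
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML-style prompt string.

    Each turn is wrapped in <|im_start|>role ... <|im_end|> tags, the
    convention used by Qwen-family chat models.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # A trailing open assistant tag cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are Dolphin, a helpful assistant."},  # illustrative
    {"role": "user", "content": "Explain abliteration in one sentence."},
])
print(prompt)
```

When loading the model through `transformers`, prefer the tokenizer's built-in chat template over hand-built strings, since it stays in sync with the model's actual special tokens.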

Key Capabilities & Features

  • Uncensored Responses: Engineered to eliminate refusal behavior using a multi-direction abliteration technique (weight orthogonalization) on specific layers, making it highly compliant.
  • Robust Instruction Following: Fine-tuned on a diverse dataset including NousResearch/Hermes-3, allenai/tulu-3, and HuggingFaceTB/smoltalk for broad instruction-following capabilities.
  • Extended Context: Although fine-tuned with a 4,096-token sequence length, it supports a maximum context length of 40,960 tokens.
  • Specialized Training Data: Includes datasets for core uncensored assistant behavior, diverse instruction following, high-quality tasks, math reasoning, and code generation.
  • Research-Oriented: Developed for TPU Research Cloud (TRC) research, focusing on exploring the boundaries of language model behavior.
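The "weight orthogonalization" mentioned above works by projecting a learned refusal direction out of selected weight matrices, so no layer output can write along that direction. A minimal NumPy sketch of the projection step (the direction `r` is random here purely for illustration; real abliteration estimates it from activation differences between refused and complied-with prompts, and the choice of layers is model-specific):

```python
import numpy as np

def orthogonalize_weights(W, r):
    """Remove the component of W's outputs that lies along direction r.

    W: (d_out, d_in) weight matrix writing into the residual stream.
    r: (d_out,) refusal direction.
    Returns W' = (I - r_hat r_hat^T) W, so W' @ x has zero projection
    onto r for every input x.
    """
    r_hat = r / np.linalg.norm(r)
    return W - np.outer(r_hat, r_hat @ W)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))   # toy layer weight
r = rng.standard_normal(8)        # toy "refusal direction" (hypothetical)
W_abl = orthogonalize_weights(W, r)

x = rng.standard_normal(4)
r_hat = r / np.linalg.norm(r)
print((W_abl @ x) @ r_hat)        # projection onto r is (numerically) zero
```

Because the edit is applied to the weights themselves rather than at inference time, the ablation costs nothing at runtime and survives quantization and export.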

Benchmark Performance

Evaluated using lm-evaluation-harness:

  • ARC-Challenge: 56.5% acc
  • HellaSwag: 64.5% acc_norm
  • TruthfulQA MC2: 48.8% acc

Good For

  • Research into uncensored AI: Ideal for exploring the implications and behaviors of models without built-in refusal mechanisms.
  • Applications requiring high compliance: Suitable for use cases where the model must attempt to fulfill all instructions, regardless of content.
  • Experimentation with model safety and alignment techniques: Provides a base for understanding and developing new alignment methods.

Disclaimer: This is a research model with no content filters. It will comply with any request without refusing. The creators are not responsible for how this model is used. Use responsibly.