Dolphin V2 8B Abliterated: Uncensored Qwen3-8B Fine-tune
Dolphin V2 8B Abliterated is an 8.2-billion-parameter language model based on the Qwen3-8B architecture, developed by 0arch-io. It was fine-tuned on 1.35 million high-quality instruction samples and then abliterated to remove refusal behaviors, so it complies with user requests without built-in censorship.
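A minimal usage sketch with Hugging Face `transformers`, assuming the model is published under a hub id like `0arch-io/dolphin-v2-8b-abliterated` (the repo id and generation settings below are illustrative assumptions, not taken from this card):

```python
MODEL_ID = "0arch-io/dolphin-v2-8b-abliterated"  # assumed hub id -- check the actual repo

def chat(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a single-turn reply. Downloads ~16 GB of weights on first use."""
    # Deferred import so the helper can be defined without loading the heavy library.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Because the model follows the Qwen3 chat template, `apply_chat_template` handles the prompt formatting; no manual special tokens are needed.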
Key Capabilities & Features
- Uncensored Responses: Refusal behavior is suppressed via a multi-direction abliteration technique (weight orthogonalization) applied to specific layers, making the model highly compliant.
- Robust Instruction Following: Fine-tuned on a diverse dataset including NousResearch/Hermes-3, allenai/tulu-3, and HuggingFaceTB/smoltalk for broad instruction-following capabilities.
- Extended Context: While fine-tuned with a 4,096-token sequence length, it supports a maximum context length of 40,960 tokens.
- Specialized Training Data: Includes datasets for core uncensored assistant behavior, diverse instruction following, high-quality tasks, math reasoning, and code generation.
- Research-Oriented: Developed for TPU Research Cloud (TRC) research, focusing on exploring the boundaries of language model behavior.
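The abliteration mentioned above can be sketched as weight orthogonalization: given a unit-norm "refusal direction" r extracted from activations, each targeted weight matrix W that writes into the residual stream is replaced by W' = W − r(rᵀW), so the layer can no longer write along r. A minimal NumPy sketch (the random direction and matrix here are illustrative, not the model's actual weights):

```python
import numpy as np

def orthogonalize(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of W's output that lies along direction r.

    W: (d_model, d_in) matrix writing into the residual stream.
    r: (d_model,) "refusal direction" extracted from activations.
    Returns W' = W - r (r^T W), so that r^T W' = 0.
    """
    r = r / np.linalg.norm(r)          # ensure unit norm
    return W - np.outer(r, r @ W)      # project out the r-component

rng = np.random.default_rng(0)
d_model, d_in = 8, 4
W = rng.normal(size=(d_model, d_in))   # stand-in for e.g. an attention output matrix
r = rng.normal(size=d_model)           # stand-in refusal direction

W_abl = orthogonalize(W, r)
# The orthogonalized weights can no longer write along r:
print(np.allclose((r / np.linalg.norm(r)) @ W_abl, 0.0))  # True
```

A multi-direction variant repeats this projection for each extracted direction (or projects onto the complement of their span); applying it to several layers at once is what makes the edit hard for the model to route around.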
Benchmark Performance
Evaluated using lm-evaluation-harness:
- ARC-Challenge: 56.5% acc
- HellaSwag: 64.5% acc_norm
- TruthfulQA MC2: 48.8% acc
Good For
- Research into uncensored AI: Ideal for exploring the implications and behaviors of models without built-in refusal mechanisms.
- Applications requiring high compliance: Suitable for use cases where the model must attempt to fulfill all instructions, regardless of content.
- Experimentation with model safety and alignment techniques: Provides a base for understanding and developing new alignment methods.
Disclaimer: This is a research model with no content filters; it will attempt to comply with any request without refusing. The creators are not responsible for how this model is used. Use responsibly.