3uer/Qwen2.5-7B-Instruct-abliterated-v3

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 31, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

3uer/Qwen2.5-7B-Instruct-abliterated-v3 is a 7.6-billion-parameter instruction-tuned causal language model developed by huihui-ai, based on Qwen/Qwen2.5-7B-Instruct. It was modified through an "abliteration" process intended to remove refusal behaviors, producing an uncensored variant. The model supports a context length of 32768 tokens and is aimed at applications that require less restrictive content generation, although its test results indicate some performance degradation compared to the base model.
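As a Qwen2.5-Instruct derivative, the model expects prompts in the ChatML format. In practice, `tokenizer.apply_chat_template` from the `transformers` library handles this automatically; the hand-rolled sketch below is only for illustrating what the format looks like:

```python
def build_chatml_prompt(messages):
    """Format chat messages in the ChatML style used by Qwen2.5-Instruct.

    Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers,
    followed by an open assistant turn as the generation cue.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

With `transformers`, the equivalent is `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`, which also stays in sync with the template shipped in the model repository.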


Overview

This model, huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3, is an uncensored variant of the Qwen/Qwen2.5-7B-Instruct base model. It was created using an "abliteration" technique, a proof-of-concept implementation designed to remove refusal behaviors from the LLM without relying on TransformerLens. While the goal is to reduce refusals, the developers note that test results are not significantly improved, and an earlier version suffered from garbled text output.
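The core idea behind abliteration is directional ablation: estimate a "refusal direction" in the residual stream from activations on refused vs. accepted prompts, then orthogonalize the weight matrices that write into the residual stream so they can no longer output along that direction. A toy NumPy sketch of this projection step, with synthetic activations standing in for real hidden states:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

# Toy residual-stream activations: rows are per-prompt hidden states
# (synthetic stand-ins for activations on refused vs. accepted prompts).
harmful = rng.normal(size=(32, d_model)) + 2.0
harmless = rng.normal(size=(32, d_model))

# 1. Estimate the refusal direction as the normalized difference of means.
direction = harmful.mean(axis=0) - harmless.mean(axis=0)
direction /= np.linalg.norm(direction)

# 2. Orthogonalize a weight matrix that writes into the residual stream:
#    W' = (I - d d^T) W, so every output of W' has zero component along d.
W = rng.normal(size=(d_model, d_model))
W_ablated = W - np.outer(direction, direction) @ W

# Any output of the ablated matrix is orthogonal to the refusal direction
# (up to floating-point error).
x = rng.normal(size=d_model)
print(abs(direction @ (W_ablated @ x)))
```

In the real procedure this projection is applied to every matrix writing into the residual stream (attention output and MLP down-projections), with the direction estimated from actual model activations rather than synthetic data.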

Key Capabilities

  • Uncensored Content Generation: Designed to produce responses without typical refusal behaviors found in instruction-tuned models.
  • Multilingual Support: Inherits multilingual capabilities from its base model, supporting languages such as Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
  • Integration with Ollama: Directly available for use with Ollama via huihui_ai/qwen2.5-abliterate.
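For the Ollama integration, requests against a locally running server go to the `/api/chat` endpoint. A sketch of the request body (built but not sent here; assumes Ollama is installed and the model has been pulled as `huihui_ai/qwen2.5-abliterate`):

```python
import json

# Request body for Ollama's /api/chat endpoint. POST this to
# http://localhost:11434/api/chat on a running Ollama server.
payload = {
    "model": "huihui_ai/qwen2.5-abliterate",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,  # return a single JSON response instead of a stream
}
print(json.dumps(payload, indent=2))
```

With `stream` set to `False`, the server returns one JSON object whose `message.content` field holds the assistant reply.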

Good for

  • Experimental Use Cases: Ideal for researchers and developers exploring methods to modify LLM behavior, specifically in removing refusal mechanisms.
  • Applications Requiring Less Content Restriction: Suitable for scenarios where the base model's inherent content restrictions are undesirable, provided the user understands the potential trade-offs in other performance metrics.
  • Prototyping Abliteration Techniques: Serves as a practical example for understanding and testing the abliteration process described in the remove-refusals-with-transformers project.