huihui-ai/Huihui-Qwen3-VL-2B-Thinking-abliterated

Vision · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Context Length: 32k · Published: Oct 24, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

huihui-ai/Huihui-Qwen3-VL-2B-Thinking-abliterated is a 2 billion parameter multimodal model based on Qwen3-VL-2B-Thinking, developed by huihui-ai. The model has been abliterated to remove refusal behaviors from its text generation, so it describes and analyzes images without issuing refusal statements about its limitations. It is intended for vision-language tasks where uncensored textual responses to image inputs are desired, and serves as a distinct alternative to its base model.


Overview

This model, huihui-ai/Huihui-Qwen3-VL-2B-Thinking-abliterated, is a 2 billion parameter multimodal (vision-language) model derived from the Qwen3-VL-2B-Thinking architecture. Its primary distinction is the "abliteration" process applied to its text generation component, which effectively removes refusal behaviors. This means the model will no longer respond with phrases like "I can't describe or analyze this image," making it suitable for use cases requiring direct and unfiltered image analysis.

Key Capabilities

  • Uncensored Vision-Language Interaction: Directly describes and analyzes images without generating refusal statements.
  • Multimodal Processing: Handles both image and text inputs, generating textual outputs.
  • Qwen3-VL Base: Leverages the underlying capabilities of the Qwen3-VL-2B-Thinking model for visual understanding.

Usage Considerations

This model is explicitly designed with significantly reduced safety filtering. Users should be aware of the following:

  • Risk of Sensitive Outputs: It may generate sensitive, controversial, or inappropriate content due to limited content filtering.
  • Research and Experimental Use: Recommended for research, testing, or controlled environments rather than production or public-facing commercial applications.
  • User Responsibility: Users are solely responsible for ensuring compliance with legal and ethical standards, and for monitoring generated outputs.

Integration

The model can be loaded with the Hugging Face transformers library, and is also available through Ollama (version v0.12.7 or newer) via ollama run huihui_ai/qwen3-vl-abliterated:2b.
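A minimal sketch of the transformers path is shown below. The class names (AutoProcessor, AutoModelForImageTextToText) and the chat-template call follow the Qwen-VL integration in recent transformers releases, but are assumptions here; verify them against your installed version before relying on this.

```python
# Minimal sketch: running the abliterated model via Hugging Face transformers.
# Assumes a recent transformers release with Qwen3-VL support; class names
# and the chat-template API should be checked against your installed version.

MODEL_ID = "huihui-ai/Huihui-Qwen3-VL-2B-Thinking-abliterated"

def build_messages(image_url: str, prompt: str) -> list:
    """One user turn pairing an image reference with a text prompt."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": prompt},
            ],
        }
    ]

def describe_image(image_url: str, prompt: str = "Describe this image.") -> str:
    """Load the BF16 weights (downloaded on first call) and generate a reply."""
    # Imported lazily so build_messages works without transformers installed.
    from transformers import AutoModelForImageTextToText, AutoProcessor

    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, dtype="bfloat16")

    inputs = processor.apply_chat_template(
        build_messages(image_url, prompt),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    )
    output_ids = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, skipping the prompt.
    return processor.decode(
        output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

Note that the Thinking variant may emit reasoning tokens before its final answer; post-process the decoded text accordingly if you only want the final response.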