huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated
The huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated model is a 3-billion-parameter, instruction-tuned vision-language model based on the Qwen2.5-VL-3B-Instruct architecture. Developed by huihui-ai, it has been 'abliterated' to remove refusal behaviors from its text generation while retaining its multimodal image-to-text functionality. It is designed for applications requiring uncensored text outputs alongside visual understanding and supports a 32,768-token context length.
Overview
This model, huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated, is an uncensored variant of the original Qwen/Qwen2.5-VL-3B-Instruct vision-language model. It applies 'abliteration' to remove refusal behaviors from the text generation component while leaving the image processing pipeline unchanged. The model is instruction-tuned and supports a 32,768-token context length, making it suitable for long, complex multimodal prompts.
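Since the architecture is unchanged, the model loads with the standard Qwen2.5-VL usage pattern in Hugging Face transformers. The snippet below is a minimal inference sketch under that assumption; it requires a transformers version with Qwen2.5-VL support (4.49 or later) plus the qwen-vl-utils helper package, and the image path is a placeholder.

```python
# Minimal sketch: standard Qwen2.5-VL inference via Hugging Face transformers.
# Assumes transformers >= 4.49 (Qwen2.5-VL support) and `pip install qwen-vl-utils`.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One user turn containing an image plus a text instruction.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image.jpg"},  # placeholder path
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

# Build the chat prompt and extract the vision inputs from the messages.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

The 32,768-token context leaves headroom for long multi-turn exchanges or multiple images per prompt, and `device_map="auto"` places the model on a GPU when one is available.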
Key Capabilities
- Uncensored Text Generation: Modified to produce responses without refusal behaviors, offering greater flexibility for specific applications.
- Multimodal Understanding: Retains the ability to process and interpret image inputs alongside text, generating descriptive or analytical text outputs.
- Instruction Following: Retains the instruction-following behavior of the Qwen2.5-VL-Instruct base across a range of tasks.
- Deployment Flexibility: Available in GGUF format with Ollama support, facilitating local deployment and integration (see the sketch after this list).
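For the Ollama route mentioned above, the sketch below uses the official ollama Python client against a locally running Ollama server. The model tag shown is hypothetical: substitute whatever tag is actually published for this model, and pull it first with `ollama pull`.

```python
# Minimal sketch: querying a local Ollama server with the official `ollama`
# Python client (`pip install ollama`). The model tag below is hypothetical --
# replace it with the tag actually published for this model.
import ollama

response = ollama.chat(
    model="huihui_ai/qwen2.5-vl-abliterated",  # hypothetical tag
    messages=[
        {
            "role": "user",
            "content": "What is shown in this picture?",
            "images": ["./example.jpg"],  # local image file
        }
    ],
)
print(response["message"]["content"])
```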
Good For
- Developers requiring a vision-language model with unrestricted text output for research or specific applications.
- Image description, visual question answering, and other multimodal tasks that need the base Qwen2.5-VL-3B-Instruct capabilities without built-in content moderation of text responses.
- Experimentation with abliteration techniques and their impact on model behavior.