huihui-ai/Qwen2.5-VL-7B-Instruct-abliterated
The huihui-ai/Qwen2.5-VL-7B-Instruct-abliterated model is a 7-billion-parameter multimodal instruction-tuned model based on Qwen/Qwen2.5-VL-7B-Instruct. Developed by huihui-ai, this version has undergone "abliteration" to remove refusal behaviors from its text generation, while retaining the original vision processing. It is primarily intended for applications that need an uncensored, vision-capable language model with a 32,768-token context length.
huihui-ai/Qwen2.5-VL-7B-Instruct-abliterated Overview
This model is an uncensored, multimodal instruction-tuned language model with 7 billion parameters, derived from the original Qwen/Qwen2.5-VL-7B-Instruct. The key differentiator is its "abliterated" text generation component, which has been specifically processed to remove refusal behaviors, making it suitable for a wider range of applications where unconstrained responses are desired. It maintains the original model's vision capabilities, allowing it to process and understand image inputs alongside text.
Key Capabilities
- Multimodal Understanding: Processes both image and text inputs to generate responses.
- Uncensored Text Generation: The abliteration technique has been applied to remove refusal tendencies from text outputs.
- Instruction Following: Follows natural-language instructions across text-only and image-grounded tasks.
- Large Context Window: Supports a context length of 32,768 tokens.
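To make the multimodal capability concrete, the sketch below builds the chat-message structure that Qwen2.5-VL-style processors expect for a mixed image-and-text turn: a "content" list of typed parts. The helper name and the image path are illustrative placeholders, not part of this model's API.

```python
# Sketch of the message format used for mixed image + text input with
# Qwen2.5-VL-style chat templates. build_multimodal_message and the
# "demo.jpg" path are illustrative placeholders.

def build_multimodal_message(image_ref: str, question: str) -> list[dict]:
    """Return a single-turn chat with one image part and one text part."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_ref},  # image part
                {"type": "text", "text": question},     # text part
            ],
        }
    ]

messages = build_multimodal_message("demo.jpg", "Describe this image.")
```

A list like this can be passed to the processor's chat template to produce the final prompt string.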
Good For
- Applications requiring a vision-language model that provides direct, unfiltered responses without built-in refusals.
- Research and development into model safety and control, particularly in understanding and modifying refusal behaviors.
- Use cases where the original Qwen2.5-VL-7B-Instruct's vision capabilities are needed, but with modified conversational guardrails.
- Integration into systems via Hugging Face's transformers library or directly through Ollama.