Zubenelakrab/Qwen2.5-7B-Instruct-abliterated
Text Generation · Model Size: 7.6B · Quantization: FP8 · Context Length: 32k · Concurrency Cost: 1 · Published: Mar 15, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Zubenelakrab/Qwen2.5-7B-Instruct-abliterated is a 7.62-billion-parameter causal language model published by Zubenelakrab, built on Qwen2.5-7B-Instruct. It is an "abliterated" variant: the weights are modified to reduce refusal behavior relative to the base model. The model supports a 32,768-token context length and targets general instruction-following tasks where less restrictive response generation is desired.
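
Below is a minimal usage sketch, assuming the model is distributed as a standard Hugging Face `transformers` checkpoint with the usual Qwen2.5-Instruct chat template (the common layout for Qwen2.5 derivatives); loading options may differ for the FP8-quantized weights or a hosted inference endpoint.

```python
# Minimal sketch: load and prompt the model via transformers.
# Assumes a standard Qwen2.5-style checkpoint; adjust for FP8 serving setups.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zubenelakrab/Qwen2.5-7B-Instruct-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the precision stored in the checkpoint
    device_map="auto",    # place layers across available GPU(s)/CPU
)

# Qwen2.5-Instruct models are prompted through a chat template.
messages = [{"role": "user", "content": "Explain FP8 quantization in one paragraph."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```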
