huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated
Text generation · Model size: 7.6B · Quant: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Jan 28, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights
The huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated model is a 7.6-billion-parameter instruction-tuned causal language model derived from Qwen's Qwen2.5-7B-Instruct-1M. It has been modified with an abliteration technique that removes the base model's refusal behaviors, so it produces direct, unfiltered responses. The model retains a substantial 131,072-token context length, making it suitable for applications that require extensive conversational memory or long-document processing without content restrictions. This removal of refusal mechanisms is its primary differentiator.
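As an instruction-tuned Qwen2.5 derivative, the model can be run with the standard Hugging Face transformers chat workflow. The sketch below assumes transformers is installed and that the repo id above is available; parameter choices such as `max_new_tokens` are illustrative, not prescribed by the model card.

```python
MODEL_ID = "huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated"


def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble a conversation in the Qwen2.5 chat-messages format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def main() -> None:
    # Heavy imports kept local so the helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    messages = build_messages(
        "You are a helpful assistant.",
        "Summarize the key points of the attached report.",
    )
    # Render the chat into the model's prompt template.
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=512)
    # Drop the prompt tokens before decoding the reply.
    reply_ids = output_ids[0][inputs.input_ids.shape[-1]:]
    print(tokenizer.decode(reply_ids, skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Long-context inputs follow the same pattern; the document to process is simply placed in the user message.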