huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Ctx length: 32k · Published: Jan 28, 2025 · License: apache-2.0 · Architecture: Transformer

The huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated model is a 7.6 billion parameter instruction-tuned causal language model derived from Qwen's Qwen2.5-7B-Instruct-1M. It has been modified with an abliteration technique that removes refusal behaviors, yielding uncensored responses. The model retains a substantial 131,072 token context length, making it suitable for applications that require extensive conversational memory or document processing without content restrictions. Its primary differentiator is the removal of refusal mechanisms, which enables direct, unfiltered responses.


Model Overview

The huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated is a 7.6 billion parameter instruction-tuned language model based on the Qwen2.5-7B-Instruct-1M architecture. Its core distinction lies in the application of an "abliteration" technique, specifically designed to remove refusal behaviors from the model's responses. This modification aims to provide an uncensored version of the original Qwen model.
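
For local experimentation, the checkpoint can be loaded like any other Qwen2.5 instruct model. Below is a minimal sketch using the standard Hugging Face Transformers chat workflow; the dtype and device settings are assumptions to adjust for your hardware.

```python
# Minimal sketch: loading the checkpoint with Hugging Face Transformers.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits your hardware
    device_map="auto",
)

# Standard chat-template flow for Qwen2.5 instruct models.
messages = [{"role": "user", "content": "Summarize the plot of Hamlet."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```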

Key Capabilities

  • Uncensored Responses: Modified to bypass typical refusal mechanisms, allowing for direct answers to a broader range of prompts.
  • Instruction Following: Retains the instruction-following capabilities of the base Qwen2.5-7B-Instruct-1M model.
  • Large Context Window: Supports a context length of 131,072 tokens, enabling processing of extensive inputs and maintaining long-form conversations (see the sketch after this list).
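
The advertised window can be checked against the checkpoint's own configuration; a minimal sketch with Transformers (assuming network access to Hugging Face):

```python
# Print the context length the checkpoint itself declares.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated")
print(config.max_position_embeddings)  # context length declared in the config
```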

Unique Differentiator

This model's primary innovation is its abliteration process, a proof-of-concept implementation that removes refusals without relying on TransformerLens. This distinguishes it from other instruction-tuned models by offering more permissive response generation.
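
The repository does not publish its exact procedure here, but abliteration is commonly described as estimating a "refusal direction" from the difference in mean residual-stream activations between prompts the model refuses and prompts it answers, then projecting that direction out of the model's weights. The sketch below illustrates only that linear-algebra step on random stand-in tensors; the hidden size, layer choice, and prompt sets are placeholders, not the author's actual settings.

```python
# Illustrative sketch of the general abliteration idea, not the repo's
# exact code: estimate a refusal direction from activation differences
# and project it out of a weight matrix. All tensors are random stand-ins.
import torch

hidden = 64  # stand-in hidden size

# Mean residual-stream activations collected on two prompt sets
# (in practice: refused vs. complied instructions at a chosen layer).
act_refuse = torch.randn(hidden)
act_comply = torch.randn(hidden)

# Refusal direction: normalized difference of means.
r = act_refuse - act_comply
r = r / r.norm()

# Orthogonalize an output-projection weight against that direction so
# the layer can no longer write along it: W' = W - r r^T W.
W = torch.randn(hidden, hidden)
W_abliterated = W - torch.outer(r, r @ W)

# The modified weight's output has (near-)zero component along r.
x = torch.randn(hidden)
print(torch.dot(r, W_abliterated @ x))  # ~0 up to float error
```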

Integration

Users can readily deploy this model via Ollama using the huihui_ai/qwen2.5-1m-abliterated tag, simplifying local execution and experimentation with its uncensored capabilities.
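
For example, `ollama run huihui_ai/qwen2.5-1m-abliterated` starts an interactive session from the command line. The sketch below instead queries a running Ollama server programmatically through its standard HTTP chat endpoint (default port 11434), assuming the model has already been pulled.

```python
# Minimal sketch: querying the model through Ollama's local HTTP API.
# Assumes an Ollama server is running and the tag has been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "huihui_ai/qwen2.5-1m-abliterated",
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
        "stream": False,
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```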