huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated

Status: Warm
Visibility: Public
Parameters: 14.8B
Quantization: FP8
Context length: 131072
Updated: Jan 28, 2025
License: apache-2.0
Source: Hugging Face
Overview

huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated is a 14.8-billion-parameter instruction-tuned model based on Qwen2.5-14B-Instruct-1M. Its defining modification is "abliteration": a process that removes the model's refusal mechanisms, yielding uncensored output. The modification was implemented as a proof of concept using the techniques described in the remove-refusals-with-transformers project, without relying on TransformerLens.

Key Capabilities

  • Uncensored Output: Designed to generate responses without typical refusal behaviors found in many instruction-tuned models.
  • Instruction Following: Retains the instruction-following capabilities of its base Qwen2.5-14B-Instruct-1M model.
  • Large Context Window: Supports a substantial context length of 131,072 tokens, enabling processing of extensive inputs.
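
Because the model retains the Qwen2.5 chat interface, prompts follow the ChatML-style template used across the Qwen2.5 family. A minimal sketch of that format is below; the format_chatml helper is a hypothetical illustration, and in practice the transformers library's tokenizer.apply_chat_template builds this string for you.

```python
# Sketch of the ChatML-style prompt format used by Qwen2.5-family chat
# models. format_chatml is a hypothetical helper for illustration only;
# the real tokenizer applies this template automatically.

def format_chatml(messages):
    """Render a list of {role, content} dicts into a ChatML prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize abliteration in one sentence."},
])
print(prompt)
```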

Good For

  • Research into Model Refusal Mechanisms: Useful for studying the effects and removal of refusal behaviors in large language models.
  • Applications Requiring Unfiltered Responses: Suitable for use cases where direct and uncensored language generation is a specific requirement.
  • Ollama Integration: Directly available for use with Ollama, simplifying deployment for local inference via ollama run huihui_ai/qwen2.5-1m-abliterated:14b.
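
Once the model has been pulled, a local Ollama server can also be queried over its REST API (POST /api/generate on the default port 11434). A minimal sketch using only the standard library, assuming the model tag from this card; the generate helper requires a running server and is shown for illustration:

```python
import json
import urllib.request

# Sketch of calling a locally running Ollama server via its REST API.
# Assumes Ollama is listening on the default port 11434 and that
# `ollama pull huihui_ai/qwen2.5-1m-abliterated:14b` has been run.

MODEL = "huihui_ai/qwen2.5-1m-abliterated:14b"

def build_generate_payload(prompt, model=MODEL):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, host="http://localhost:11434"):
    """Send a non-streaming generation request; needs a running server."""
    body = json.dumps(build_generate_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Request body sent to the server (no network call made here):
payload = build_generate_payload("Explain abliteration in one sentence.")
print(json.dumps(payload))
```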