Overview
This model, zetasepic/Qwen2.5-72B-Instruct-abliterated-v2, is a modified version of the 72.7-billion-parameter Qwen2.5-72B-Instruct. It has undergone "abliteration," a process that alters the model's behavior to reduce its tendency toward admonition and moralizing in generated text. The technique builds on code from the refusal_direction project.
Key Characteristics
- Base Model: Qwen/Qwen2.5-72B-Instruct, a powerful instruction-tuned large language model.
- Parameter Count: 72.7 billion parameters, indicating a high capacity for complex language understanding and generation.
- Context Length: Supports a substantial context window of 131,072 tokens.
- Abliteration Technique: Modified to specifically minimize outputs that contain admonitions or moral appeals, aiming for more neutral or direct responses.
- Origin: The abliteration process is detailed in an article on Hugging Face and was influenced by work from @FailSpy.
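At its core, the refusal_direction approach identifies a single direction in the model's residual-stream activations associated with the unwanted behavior and removes the component of activations along that direction (or bakes the equivalent projection into the weights). The following is a minimal, framework-free sketch of that linear-algebra step only; the function name and the toy vectors are illustrative and do not come from the actual repository:

```python
import numpy as np

def ablate_direction(activation: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of `activation` along `direction`.

    After this projection, the returned vector has zero dot product
    with the targeted direction, which is the core ablation operation.
    """
    unit = direction / np.linalg.norm(direction)
    return activation - np.dot(activation, unit) * unit

# Toy example in a 4-dimensional "residual stream".
rng = np.random.default_rng(0)
refusal_dir = rng.normal(size=4)   # stand-in for a learned behavioral direction
hidden = rng.normal(size=4)        # stand-in for one hidden-state vector

ablated = ablate_direction(hidden, refusal_dir)
# The ablated activation is orthogonal to the targeted direction.
print(np.allclose(np.dot(ablated, refusal_dir), 0.0))  # → True
```

In the real pipeline this projection is applied across layers (or folded into the weight matrices so no runtime hook is needed), but the per-vector operation is exactly this orthogonal projection.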
Use Cases
This model is particularly suited for applications where the user requires responses that are:
- Direct and Factual: Less likely to include unsolicited advice or moral judgments.
- Neutral in Tone: Designed to avoid preachy or admonishing language often present in general-purpose instruction-tuned models.
For users interested in local deployment, a GGUF version of this model is also available.
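With the GGUF build, a local run via llama.cpp would look roughly like the following; the file name and quantization suffix are placeholders, so check the actual GGUF release for the exact names:

```shell
# Hypothetical invocation: adjust the model path and quantization
# suffix (e.g. Q4_K_M) to match the downloaded GGUF file.
llama-cli -m Qwen2.5-72B-Instruct-abliterated-v2.Q4_K_M.gguf \
  -p "Summarize the trade-offs of quantization." -n 256
```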