zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
Hugging Face
Text generation · Concurrency cost: 2 · Model size: 32.8B · Quant: FP8 · Context length: 32K · Published: Oct 11, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

zetasepic/Qwen2.5-32B-Instruct-abliterated-v2 is a 32.8-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture. It has been 'abliterated' using techniques from the refusal_direction project to reduce admonitions and moral appeals in its responses, yielding more neutral, direct output for applications where ethical or moralizing language is undesirable.


Model Overview

zetasepic/Qwen2.5-32B-Instruct-abliterated-v2 is an instruction-tuned large language model, built upon the robust Qwen2.5-32B-Instruct architecture. This version has undergone a specialized 'abliteration' process, leveraging code from the refusal_direction project. The primary goal of this modification is to systematically reduce the model's tendency towards admonition and moral appeal in its generated text.

Key Capabilities

  • Reduced Moralizing: Engineered to minimize outputs containing ethical judgments, advice, or moralizing tones.
  • Direct Responses: Aims to provide more factual and less opinionated or preachy answers.
  • Qwen2.5 Foundation: Retains the strong language understanding and generation capabilities of the original Qwen2.5-32B-Instruct model.
  • Large Scale: With 32.8 billion parameters and a context window of up to 131,072 tokens (served here with a 32K limit), it handles complex queries and extensive information.
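Like the base Qwen2.5-Instruct model, this variant expects chat-formatted prompts. As a minimal sketch, the ChatML-style format used by the Qwen2.5 family can be built by hand (in practice, the tokenizer's `apply_chat_template` method handles this for you):

```python
# Minimal sketch of the ChatML-style prompt format used by the Qwen2.5
# family. In real usage, prefer tokenizer.apply_chat_template from the
# Hugging Face transformers library.
def build_chatml_prompt(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize FP8 quantization in one sentence."},
])
print(prompt)
```

The trailing `<|im_start|>assistant` turn is left open so the model generates the assistant reply rather than a new user turn.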

Good For

  • Applications requiring strictly neutral or objective language.
  • Use cases where avoiding unsolicited advice or moral commentary is crucial.
  • Research into controlling specific behavioral aspects of large language models.
  • Developers seeking a powerful instruction-tuned model with a modified response style.

For technical details on the abliteration technique, refer to this article.
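As a rough illustration of the idea behind directional ablation (this is a toy sketch, not the refusal_direction project's actual code): given a unit vector capturing the model's refusal behavior, that direction is projected out of the weights so the model can no longer write along it.

```python
import numpy as np

# Toy sketch of directional ablation ("abliteration"): given a unit
# vector r representing a refusal direction, remove the component
# along r from a weight matrix W. Real implementations apply this to
# specific transformer weight matrices; here W is just a random matrix.
def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    r = r / np.linalg.norm(r)       # normalize to a unit vector
    return W - np.outer(r, r) @ W   # project out the component along r

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
r = rng.standard_normal(8)
W_ablated = ablate_direction(W, r)

# The ablated weights have no remaining component along r.
print(np.allclose((r / np.linalg.norm(r)) @ W_ablated, 0.0))
```

The projection `W - r rᵀ W` zeroes out exactly the subspace spanned by the refusal direction while leaving the orthogonal complement of the weights untouched.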

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model cover the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
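These settings map directly onto the request body of an OpenAI-compatible chat completions endpoint (note that `repetition_penalty` and `min_p` are extensions supported by many inference servers rather than standard OpenAI parameters). The values below are illustrative placeholders, not the actual top configurations:

```python
import json

# Illustrative request body for an OpenAI-compatible chat completions
# endpoint. The sampler values are placeholders, not the actual top
# configurations used by Featherless users.
payload = {
    "model": "zetasepic/Qwen2.5-32B-Instruct-abliterated-v2",
    "messages": [{"role": "user", "content": "List three uses of FP8 quantization."}],
    "temperature": 0.7,        # randomness of sampling
    "top_p": 0.9,              # nucleus sampling cutoff
    "top_k": 40,               # sample only from the 40 most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they appeared
    "presence_penalty": 0.0,   # penalize tokens that appeared at all
    "repetition_penalty": 1.1, # multiplicative repetition penalty (server extension)
    "min_p": 0.05,             # drop tokens below 5% of the top token's probability
}
print(json.dumps(payload, indent=2))
```

Sending this payload (e.g. via an HTTP POST to the server's `/v1/chat/completions` route) applies all seven samplers in a single request.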