huihui-ai/Qwen2.5-7B-Instruct-abliterated

Parameters: 7.6B
Quantization: FP8
Context length: 131072
License: apache-2.0
Repository: Hugging Face

Overview

huihui-ai/Qwen2.5-7B-Instruct-abliterated is a 7.6 billion parameter instruction-tuned language model derived from the Qwen2.5-7B-Instruct base model. Its defining difference is that it is an uncensored variant: an "abliteration" pass modifies the model weights to suppress the base model's refusal behavior, so it imposes fewer content restrictions than its base counterpart.
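
Because it is a drop-in replacement for the base model, it can be loaded with the standard transformers chat workflow. The snippet below is a minimal sketch; the model ID comes from this card, while the dtype, device placement, and generation settings are illustrative assumptions:

```python
# Minimal sketch: load the abliterated checkpoint and run one chat turn.
# Assumes a recent transformers release and a GPU with enough memory for bf16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Qwen2.5-7B-Instruct-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # illustrative; use float16 or quantization as needed
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the plot of Journey to the West in three sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```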

Key Characteristics & Performance

  • Uncensored Output: Utilizes an abliteration process to reduce inherent content moderation, offering more flexibility in generated text.
  • Multilingual Support: Capable of processing and generating text in numerous languages including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
  • Context Length: Inherits the 131072-token context window of the Qwen2.5 architecture (see the configuration sketch after this list).
  • Evaluations: Performance is broadly preserved, with small shifts on individual benchmarks relative to the original Qwen2.5-7B-Instruct:
    • IF_Eval: Slightly improved to 76.49 (from 76.44).
    • TruthfulQA: Improved to 64.92 (from 62.46).
    • MMLU Pro: Decreased to 41.71 (from 43.12).
    • BBH: Decreased to 52.77 (from 53.92).
    • GPQA: Slightly improved to 31.97 (from 31.91).
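
The 131072-token figure matches the upstream Qwen2.5-7B-Instruct card, which enables the full window via YaRN rope scaling on top of the default 32768-position configuration. Assuming the abliterated repository ships the same configuration as the base model, the sketch below overrides it at load time; the rope_scaling values are the ones published for the base model and are not verified against this repository:

```python
# Sketch only: extend the usable context window with YaRN rope scaling,
# using the settings published for the upstream Qwen2.5-7B-Instruct.
# Whether the abliterated repo uses the same config.json is an assumption.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Qwen2.5-7B-Instruct-abliterated"

config = AutoConfig.from_pretrained(
    model_id,
    rope_scaling={
        "type": "yarn",
        "factor": 4.0,                              # 32768 * 4 = 131072 positions
        "original_max_position_embeddings": 32768,
    },
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, config=config, device_map="auto")
```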

Use Cases

This model fits applications where the base Qwen2.5-7B-Instruct would otherwise be used but less restrictive content generation is required: it keeps the instruction-following behavior and large context window while relaxing built-in censorship. A newer revision, Qwen2.5-7B-Instruct-abliterated-v2, is also available and is recommended for updated performance.
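
The FP8 tag above refers to the precision used by this listing; for self-hosting, one option is to load the published checkpoint with on-the-fly FP8 quantization in vLLM. The sketch below assumes a vLLM build with FP8 support and compatible hardware, and the sampling settings are illustrative:

```python
# Sketch: serve the checkpoint locally with vLLM, quantizing weights to FP8 at load time.
# FP8 support and the chosen max_model_len are assumptions about your vLLM build and GPU memory.
from vllm import LLM, SamplingParams

llm = LLM(
    model="huihui-ai/Qwen2.5-7B-Instruct-abliterated",
    quantization="fp8",      # dynamic FP8 weight quantization
    max_model_len=32768,     # raise toward 131072 if memory and rope settings allow
)

params = SamplingParams(temperature=0.7, max_tokens=256)
messages = [{"role": "user", "content": "Write a short haiku about long context windows."}]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```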