huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2

Text Generation · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Concurrency Cost: 1 · Published: Sep 22, 2024 · License: apache-2.0 · Architecture: Transformer

huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2 is a 7.6 billion parameter instruction-tuned causal language model, derived from Qwen/Qwen2.5-7B-Instruct. This model has been modified using an 'abliteration' technique to remove censorship, making it suitable for applications requiring unfiltered responses. It maintains a substantial 131,072 token context length and shows competitive performance across various benchmarks, particularly excelling in IF_Eval.


Overview

huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2 is an uncensored variant of the Qwen/Qwen2.5-7B-Instruct model, developed by huihui-ai. This 7.6 billion parameter model applies an 'abliteration' technique, as detailed in a Hugging Face article, to remove the refusal behavior of the base model. It builds upon the earlier v1 abliterated release and improves on its benchmark scores.

Key Capabilities & Performance

This model is designed to provide instruction-following capabilities without the inherent censorship present in its base model. Evaluations show its performance across several benchmarks:

  • IF_Eval: Achieves 77.82, outperforming both the original Qwen2.5-7B-Instruct and the prior abliterated version.
  • MMLU Pro: Scores 42.03.
  • TruthfulQA: Scores 57.81.
  • BBH: Scores 53.01.
  • GPQA: Achieves 32.17, slightly surpassing the original model.

Usage Considerations

This model is particularly suited for use cases where unfiltered or uncensored responses are required, due to its abliterated nature. Developers can integrate it using the Hugging Face transformers library, with provided Python code examples for loading and interaction. The model supports a large context window of 131,072 tokens, enabling extensive conversational or document-based applications.
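A minimal sketch of loading and querying the model with the transformers library, along the lines the card describes. The model id is taken from this page; the system prompt, user message, and generation settings are illustrative assumptions, not values from the card.

```python
MODEL_ID = "huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2"


def build_messages(user_message: str) -> list[dict]:
    """Build a single-turn chat in the format apply_chat_template expects."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_message},
    ]


def generate_reply(user_message: str) -> str:
    """Load the model and produce one assistant reply."""
    # Heavy imports are kept local so build_messages stays importable
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    prompt = tokenizer.apply_chat_template(
        build_messages(user_message),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[-1]:],
        skip_special_tokens=True,
    )
```

Calling `generate_reply("...")` downloads the FP8/7.6B weights on first use, so a GPU with sufficient memory is assumed; `device_map="auto"` lets accelerate place the layers.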

Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model each specify the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
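The parameters above can be collected into a preset and passed to transformers' `generate()`. A small sketch, with made-up preset values for illustration; note that `frequency_penalty` and `presence_penalty` are API-level parameters (e.g. on OpenAI-compatible endpoints) rather than `generate()` kwargs, so the hypothetical helper below rejects keys it cannot map.

```python
# Default sampling values; min_p requires a recent transformers version.
DEFAULTS = {
    "do_sample": True,
    "temperature": 1.0,
    "top_p": 1.0,
    "top_k": 50,
    "min_p": 0.0,
    "repetition_penalty": 1.0,
}


def sampler_kwargs(preset: dict) -> dict:
    """Overlay a user preset on the defaults, rejecting unknown keys."""
    unknown = set(preset) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unsupported sampler keys: {sorted(unknown)}")
    return {**DEFAULTS, **preset}


# Example: a conservative preset (illustrative values, not a real
# Featherless config).
preset = {"temperature": 0.7, "top_p": 0.9, "repetition_penalty": 1.05}
kwargs = sampler_kwargs(preset)  # pass as model.generate(**inputs, **kwargs)
```

Keeping the preset as plain data makes it easy to store and swap the three user configs without touching generation code.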