huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated

Public · 7.6B parameters · FP8 · 32,768-token context
Released: Oct 6, 2024
License: apache-2.0
Source: Hugging Face

huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated is a 7.6 billion parameter instruction-tuned causal language model derived from Qwen2.5-Coder-7B-Instruct. The model has been "abliterated" to remove refusal behavior, yielding an uncensored variant of the original Qwen2.5-Coder series. With a context window of up to 131,072 tokens (32,768 by default), it targets code-related tasks and general instruction following, providing an alternative for developers seeking less restricted model behavior.

Overview

This model, huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated, is a 7.6 billion parameter instruction-tuned language model based on the Qwen2.5-Coder-7B-Instruct architecture. It has been specifically modified using an "abliteration" technique to remove censorship, providing an uncensored variant of the original model. This approach aims to offer developers a less restricted tool for various applications, particularly those involving code generation and instruction following.

Key Characteristics

  • Uncensored Version: Modified from the original Qwen2.5-Coder-7B-Instruct using an abliteration technique.
  • Parameter Count: 7.6 billion parameters, part of a family of uncensored Qwen2.5-Coder models ranging from 0.5B to 32B.
  • Context Length: Supports up to 131,072 tokens (32,768 by default).
  • Performance: Evaluation results show performance comparable to the original Qwen2.5-Coder-7B-Instruct on benchmarks such as MMLU Pro and BBH, with slight variations on IFEval, TruthfulQA, and GPQA.
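Like other Qwen2.5 chat models, this variant expects conversations in the ChatML format. A minimal sketch of that layout for a single-turn prompt (for illustration only; in practice the tokenizer's chat template produces this string for you):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Illustrative ChatML layout used by Qwen2.5 chat models.

    In real usage, call tokenizer.apply_chat_template instead of
    building the string by hand.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )
```

The trailing `<|im_start|>assistant\n` marker is what cues the model to generate the assistant's reply.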

Use Cases

This model is suitable for developers who require an instruction-following model with reduced censorship, especially for tasks that might be constrained by the content filters of standard models. Its coder-specific lineage suggests applicability in code generation, debugging, and other programming-related tasks where an uncensored output might be beneficial.
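For such tasks, the model can be driven through the standard Hugging Face transformers chat workflow. A minimal sketch, assuming the `transformers` and `torch` packages are installed (the helper names and the default system prompt below are illustrative, not part of the model card):

```python
from typing import Dict, List

MODEL_ID = "huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated"


def build_messages(
    user_prompt: str,
    system_prompt: str = "You are a helpful coding assistant.",
) -> List[Dict[str, str]]:
    """Chat-format messages as consumed by the model's chat template."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def generate_reply(user_prompt: str, max_new_tokens: int = 512) -> str:
    """Generate a reply; heavy imports are local so the sketch can be
    read (and the message helper reused) without loading 7.6B weights."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    # Render the ChatML prompt and tokenize it.
    text = tokenizer.apply_chat_template(
        build_messages(user_prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)

    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens, keep only the newly generated reply.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Usage would be e.g. `generate_reply("Write a quicksort in Python.")`; since the weights are shipped in FP8, a single consumer GPU is typically sufficient for inference.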