huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated

Hugging Face
Text Generation · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32K · Published: Oct 1, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated is a 1.5-billion-parameter instruction-tuned causal language model derived from Qwen2.5-Coder-1.5B-Instruct. It has been "abliterated" to remove refusal behavior, yielding an uncensored variant of the original Qwen2.5-Coder series. The model retains a 131,072-token context length, is designed primarily for coding tasks, and scores higher on the IF_Eval benchmark than its censored counterpart.


Model Overview

This model, huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated, is a 1.5-billion-parameter instruction-tuned language model based on the Qwen2.5-Coder architecture. It is an uncensored variant of Qwen2.5-Coder-1.5B-Instruct, produced with an "abliteration" technique that removes content restrictions, and it supports a context length of 131,072 tokens.

Key Characteristics & Performance

  • Uncensored Version: Responds without the content refusals of its base model.
  • Coder-focused: Part of the Qwen2.5-Coder series, optimized for code-related tasks.
  • Improved IF_Eval Score: Achieves 45.41 on the IF_Eval benchmark, outperforming the original Qwen2.5-Coder-1.5B-Instruct (43.43).
  • Context Length: Offers a large context window of 131,072 tokens, useful for extensive codebases or long conversations.
  • Available Sizes: This 1.5B model is one of six uncensored Qwen2.5-Coder variants, ranging from 0.5B to 32B parameters.

Usage Considerations

This model suits developers who want an uncensored coding assistant, or applications where content filtering would block legitimate use cases. Note the trade-off: while IF_Eval improves, benchmarks such as MMLU Pro, TruthfulQA, BBH, and GPQA decline slightly relative to the original, suggesting some loss of general knowledge and reasoning in exchange for the uncensored behavior. The model can be integrated via Hugging Face's transformers library or run through its Ollama distribution.
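As a minimal sketch of the transformers integration mentioned above (the prompt text and generation settings here are illustrative assumptions, not part of the model card; the ~3 GB BF16 weights are downloaded on first use):

```python
# Hedged example: loading the abliterated model with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated"

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn through the model and return the decoded reply."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Qwen2.5 models ship a chat template; apply it to build the prompt.
    messages = [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": user_prompt},
    ]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated reply.
    reply = output[0][inputs.input_ids.shape[-1]:]
    return tokenizer.decode(reply, skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```

For the Ollama route, the model card points to a separate Ollama distribution; the exact pull tag is not given here, so consult huihui-ai's Ollama listing for it.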