huihui-ai/Qwen2.5-3B-Instruct-abliterated

Text generation · Model size: 3.1B · Quant: BF16 · Context length: 32k · Published: Nov 3, 2024 · License: apache-2.0 · Architecture: Transformer

huihui-ai/Qwen2.5-3B-Instruct-abliterated is a 3.1-billion-parameter instruction-tuned causal language model derived from Qwen's Qwen2.5-3B-Instruct. It has been "abliterated", a modification that suppresses the base model's built-in refusal behavior to produce an uncensored variant. It targets text generation tasks where unfiltered responses are desired and retains the base model's 32,768-token context length.


Overview

huihui-ai/Qwen2.5-3B-Instruct-abliterated is a 3.1-billion-parameter instruction-tuned language model built on Qwen2.5-3B-Instruct. Developed by huihui-ai, it has undergone "abliteration", a technique detailed in a Hugging Face article that identifies the refusal direction in the model's activations and removes it from the weights, lifting the base model's built-in censorship. It supports a substantial context length of 32,768 tokens and is multilingual, covering Chinese, English, French, Spanish, German, and many other languages.
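Since the model keeps the standard Qwen2.5 chat workflow, it can be loaded with the usual Hugging Face transformers API. A minimal sketch, assuming the common `apply_chat_template` + `generate` pattern (only the repo id comes from this card; the prompts and generation settings are illustrative):

```python
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble the chat-format message list that apply_chat_template expects."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def main() -> None:
    # Heavy imports and the weight download stay inside main() so the module
    # can be imported without pulling several GB of weights.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "huihui-ai/Qwen2.5-3B-Instruct-abliterated"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the BF16 quant listed above
        device_map="auto",
    )

    messages = build_messages(
        "You are a helpful assistant.",
        "Summarize what 'abliteration' does to a language model.",
    )
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:],
                           skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

The message-list helper is pure Python, so conversation plumbing can be tested without downloading the model.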

Key Capabilities

  • Uncensored Text Generation: Specifically modified to provide unfiltered responses, distinguishing it from its base model.
  • Instruction Following: Designed to follow instructions for various text generation tasks.
  • Multilingual Support: Capable of processing and generating text in a wide array of languages.
  • Large Context Window: Features a 32768-token context length, allowing for processing longer inputs and maintaining conversational coherence over extended interactions.
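To make use of the 32,768-token window in a long-running chat, older turns eventually have to be dropped. A minimal sketch of one such budgeting policy (keep the system prompt, walk turns newest-to-oldest, reserve room for the reply); the trimming strategy and token counts are our own illustration, not part of the model card, and a real application would count tokens with the model's tokenizer:

```python
CTX_LIMIT = 32768  # context length from the model card


def trim_history(system_tokens: int, turn_token_counts: list[int],
                 reserve_for_reply: int = 1024) -> list[int]:
    """Return indices of the most recent turns that fit in the window.

    system_tokens: token count of the always-kept system prompt.
    turn_token_counts: token count of each conversation turn, oldest first.
    reserve_for_reply: headroom left for the model's next response.
    """
    budget = CTX_LIMIT - system_tokens - reserve_for_reply
    kept: list[int] = []
    total = 0
    # Walk from the newest turn backwards, keeping turns while they fit.
    for i in range(len(turn_token_counts) - 1, -1, -1):
        if total + turn_token_counts[i] > budget:
            break
        total += turn_token_counts[i]
        kept.append(i)
    return list(reversed(kept))
```

For example, with a 100-token system prompt and turns of 30000, 2000, and 1000 tokens, the oldest turn no longer fits and only the last two are kept.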

Good For

  • Applications requiring an uncensored language model for creative or specific content generation.
  • Developers looking for a smaller yet capable instruction-tuned model with broad multilingual coverage.
  • Experimentation with abliterated models and their behavioral characteristics.