huihui-ai/Qwen2.5-32B-Instruct-abliterated

Hugging Face
Text generation · Concurrency cost: 2 · Model size: 32.8B · Quant: FP8 · Context length: 32k · Published: Sep 29, 2024 · License: apache-2.0 · Architecture: Transformer

huihui-ai/Qwen2.5-32B-Instruct-abliterated is a 32.8-billion-parameter instruction-tuned causal language model derived from Qwen/Qwen2.5-32B-Instruct. It has been "abliterated", a post-training modification that removes the model's built-in refusal behavior, so it produces a broader range of responses than the base model. It supports a context length of up to 131,072 tokens and is designed for general text generation, particularly in scenarios that call for less restrictive content filtering, across multiple languages.


Overview

huihui-ai/Qwen2.5-32B-Instruct-abliterated is a 32.8-billion-parameter instruction-tuned language model based on the Qwen2.5-32B-Instruct architecture. Its primary distinction is being an "abliterated" version: the refusal behavior built into the original model has been removed, allowing more open-ended and less restricted text generation. The model retains the original Qwen2.5-32B-Instruct's extensive 131,072-token context window, making it suitable for processing and generating long-form content.
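Since the weights are a standard Qwen2.5 fine-tune, ordinary Hugging Face `transformers` usage applies. A minimal sketch (only the repo name comes from this page; the prompt and generation settings are illustrative, and actually running `run_demo` requires a machine with enough GPU memory for a 32.8B model):

```python
MODEL_NAME = "huihui-ai/Qwen2.5-32B-Instruct-abliterated"


def build_messages(user_prompt, system_prompt="You are a helpful assistant."):
    """Chat messages in the role/content format expected by Qwen2.5's chat template."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def run_demo(prompt="Summarize the plot of Hamlet in three sentences."):
    """Load the model and generate a reply. Call this only on suitable hardware."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_NAME, torch_dtype="auto", device_map="auto"
    )
    # Render the chat into the model's prompt format, with the assistant turn opened.
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(
        output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
    )
```

The heavy model load is kept inside `run_demo` so the message-building logic can be reused (for example, against a hosted API) without pulling in the weights.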

Key Capabilities

  • Uncensored Text Generation: Provides responses without the content restrictions typically found in moderated instruction-tuned models.
  • Multilingual Support: Capable of handling and generating text in numerous languages, including Chinese, English, French, Spanish, German, and more.
  • Large Context Window: Supports a 131,072 token context length, enabling complex conversations and detailed document processing.

Good For

  • Developers and researchers requiring a powerful, instruction-tuned model with fewer content restrictions.
  • Applications where creative or unfiltered text generation is desired.
  • Use cases involving extensive conversational history or long document analysis due to its large context window.

Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model cover the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
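The specific values behind each configuration are not reproduced here. As an illustration only, the sketch below shows how these sampler parameters could be attached to a chat-completion request payload for an OpenAI-compatible endpoint; every value in `SAMPLER_SETTINGS` is a placeholder, not one of the actual user configurations:

```python
# Placeholder values for illustration; tune these for your own use case.
SAMPLER_SETTINGS = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}


def build_request(prompt: str, settings: dict = SAMPLER_SETTINGS) -> dict:
    """Assemble a chat-completion payload carrying the sampler parameters."""
    return {
        "model": "huihui-ai/Qwen2.5-32B-Instruct-abliterated",
        "messages": [{"role": "user", "content": prompt}],
        **settings,
    }
```

The payload is plain JSON-serializable data, so it can be sent with any HTTP client to whichever endpoint hosts the model.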