huihui-ai/Qwen2.5-72B-Instruct-abliterated
Text Generation · Concurrency Cost: 4 · Model Size: 72.7B · Quant: FP8 · Ctx Length: 32k · Published: Oct 26, 2024 · License: qwen · Architecture: Transformer

The huihui-ai/Qwen2.5-72B-Instruct-abliterated model is a 72.7 billion parameter instruction-tuned causal language model derived from Qwen/Qwen2.5-72B-Instruct. This version has been modified using an "abliteration" technique that removes refusal behaviors, yielding uncensored responses. It is designed for applications that require direct answers without built-in content restrictions, making it suitable for research into refusal removal and for specific use cases where unconstrained output is desired.


Model Overview

This model, huihui-ai/Qwen2.5-72B-Instruct-abliterated, is a 72.7 billion parameter instruction-tuned large language model. It is a modified version of the original Qwen/Qwen2.5-72B-Instruct developed by Qwen, with a key distinction: it has undergone an "abliteration" process to remove refusal behaviors.
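
The core idea behind abliteration can be illustrated in miniature: estimate a "refusal direction" from the difference in mean activations between refusal-inducing and benign prompts, then orthogonalize the model's weights against that direction so it can no longer be written to the residual stream. The sketch below uses random NumPy arrays as stand-ins for real activations and weights; it is a toy illustration of the technique, not the exact procedure used to produce this model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for residual-stream activations (hidden size 8).
harmful_acts = rng.normal(size=(32, 8))    # activations on refusal-inducing prompts
harmless_acts = rng.normal(size=(32, 8))   # activations on benign prompts

# Estimate the "refusal direction" as the normalized difference of means.
direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# Ablate: project the refusal direction out of a weight matrix
# (weight orthogonalization), so the layer's output has no component
# along that direction.
W = rng.normal(size=(8, 8))
W_ablated = W - np.outer(direction, direction @ W)

# The ablated weights no longer write anything along the refusal direction.
assert np.allclose(direction @ W_ablated, 0.0)
```

In the full technique this projection is applied to the relevant weight matrices at every layer, using activations collected from the actual model rather than random data.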

Key Characteristics

  • Uncensored Responses: The primary feature is the removal of refusal mechanisms, allowing the model to provide direct answers without built-in content restrictions.
  • Proof-of-Concept: This implementation serves as a proof-of-concept for removing refusals from LLMs without relying on TransformerLens, utilizing techniques detailed in remove-refusals-with-transformers.
  • Hugging Face Integration: Easily loadable and usable with the transformers library for inference.
  • Ollama Support: Available for direct use via Ollama as huihui_ai/qwen2.5-abliterate:72b.
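
For the transformers route, inference follows the standard chat workflow for Qwen2.5-style instruct models. The snippet below is a minimal sketch assuming sufficient GPU memory for a 72.7B model (or quantized weights); the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Qwen2.5-72B-Instruct-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # shard across available GPUs
)

# Build a chat prompt with the model's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what 'abliteration' means for an LLM."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```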

Potential Use Cases

This model is particularly suited for:

  • Research into LLM Safety and Alignment: Studying the effects and methods of removing refusal behaviors.
  • Applications requiring unconstrained output: For specific scenarios where a model's inherent refusal to answer certain prompts is undesirable.

Limitations

As an "abliterated" model, it will not refuse prompts that the base model would have refused, and may therefore generate content that could be considered unsafe or inappropriate in other contexts. Users are responsible for reviewing outputs and applying their own safeguards.

Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model vary the following sampler parameters (specific values are not reproduced here):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p