huihui-ai/Huihui-Qwen3-14B-abliterated-v2

Text generation · Concurrency cost: 1 · Model size: 14B · Quant: FP8 · Context length: 32k · Published: Jun 17, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

Huihui-Qwen3-14B-abliterated-v2 is a 14-billion-parameter Qwen3-based causal language model developed by huihui-ai, with a 32768-token context length. It is an uncensored variant of Qwen3-14B, produced through an abliteration process intended to remove refusal behaviors. The model is primarily designed for research and experimental use where reduced safety filtering is desired, and serves as a proof of concept for refusal-removal techniques.


Overview

Huihui-Qwen3-14B-abliterated-v2 is a 14 billion parameter language model based on the Qwen3 architecture, developed by huihui-ai. This model is an uncensored variant of Qwen/Qwen3-14B, created using an "abliteration" method to remove refusal behaviors. It is an improved iteration over its predecessor, huihui-ai/Qwen3-14B-abliterated, using a faster abliteration technique that yields better results and addresses issues such as garbled output by changing the candidate layer.

Key Capabilities

  • Uncensored Output: Significantly reduced safety filtering compared to standard models, allowing for a broader range of generated content.
  • Abliteration Technique: Utilizes a novel and faster method for removing refusals from LLMs, serving as a proof-of-concept without relying on TransformerLens.
  • Improved Performance: This version offers enhancements over previous abliterated models, including better results and stability.
  • Ollama Support: Directly available for use via Ollama, with a toggle for "thinking" mode.
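The card does not publish huihui-ai's actual abliteration code. As a rough illustration of the general idea behind directional ablation (a refusal direction is estimated from the difference in mean activations on refused versus accepted prompts, then projected out of a layer's weights), here is a toy numpy sketch; the data, dimensions, and single-matrix setup are all hypothetical simplifications, not the model's real procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden size

# Toy activations standing in for residual-stream states collected on
# refused vs. accepted prompts (hypothetical data): the "refused" set is
# shifted along one coordinate to plant a recoverable direction.
refused = rng.normal(size=(100, d)) + 2.0 * np.eye(d)[0]
accepted = rng.normal(size=(100, d))

# Estimate the refusal direction as the normalized difference of means.
r = refused.mean(axis=0) - accepted.mean(axis=0)
r /= np.linalg.norm(r)

# Ablate a weight matrix that writes into the residual stream:
# W_abl = (I - r r^T) W removes the component along r from every output.
W = rng.normal(size=(d, d))
W_abl = W - np.outer(r, r) @ W

# Any output of the ablated layer now has ~zero component along r.
x = rng.normal(size=d)
out = W_abl @ x
print(abs(float(r @ out)))  # numerically ~0
```

In a real transformer this projection would be applied to the matrices that write into the residual stream (e.g. attention output and MLP down-projections) at the chosen candidate layers, which is where the v2 change of candidate layer mentioned above would come in.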

Usage Warnings & Considerations

  • Risk of Sensitive Content: Due to minimal safety filtering, the model may generate sensitive, controversial, or inappropriate outputs. Users must exercise caution and review content rigorously.
  • Not for All Audiences: Outputs may be unsuitable for public settings, underage users, or applications requiring high security.
  • Legal and Ethical Responsibility: Users are solely responsible for ensuring compliance with local laws and ethical standards for generated content.
  • Research Use Recommended: Best suited for research, testing, or controlled environments, rather than production or public-facing commercial applications.
  • No Default Safety Guarantees: huihui.ai disclaims responsibility for consequences arising from the model's use, as it has not undergone rigorous safety optimization.

Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model are defined over the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p