Umranz/raw-uncensored-qwen3-14b-heretic
Umranz/raw-uncensored-qwen3-14b-heretic is a 14.8-billion-parameter causal language model, derived from Qwen/Qwen3-14B-Base and decensored with the Heretic v1.2.0 tool. It supports a 32,768-token context length and is engineered to reduce refusals, cutting the refusal rate from 99/100 for the base model to 4/100. It is intended primarily for applications that require less restrictive content generation and broader response coverage.
Overview
Umranz/raw-uncensored-qwen3-14b-heretic is a 14.8-billion-parameter large language model built on the Qwen/Qwen3-14B-Base architecture. Its primary distinction is that it is a decensored variant, produced with the Heretic v1.2.0 tool. This modification broadens the model's response coverage by significantly reducing content refusals.
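A minimal loading sketch is shown below, assuming the repository ships standard Hugging Face Transformers weights (the Qwen3 architecture requires a recent `transformers` release). The repo id comes from this card; the dtype and device settings are illustrative, not prescribed by the model authors.

```python
# Illustrative only: assumes standard AutoModelForCausalLM support
# for this Qwen3-derived checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Umranz/raw-uncensored-qwen3-14b-heretic"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick bf16/fp16 automatically where available
    device_map="auto",    # shard across available GPUs / offload as needed
)
```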
Key Capabilities
- Reduced Refusals: Refusals drop from 99/100 for the original Qwen3-14B-Base to 4/100 for this decensored version, with KL divergence from the base model tracked alongside the refusal count to gauge how far the modified model's outputs drift.
- Qwen3 Base Features: Inherits the core advancements of the Qwen3 series, including pre-training on 36 trillion tokens across 119 languages, enhanced training techniques like global-batch load balancing and qk layernorm, and a three-stage pre-training process for improved reasoning and long-context comprehension.
- Extended Context Length: Supports a context length of 32,768 tokens, enabling processing of longer inputs and generating more extensive outputs.
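Continuing from the loading sketch above, the snippet below shows plain text completion. Since the underlying checkpoint is a base model, no chat template is assumed; the prompt, sampling settings, and `max_new_tokens` value are illustrative, and the 32,768-token window bounds prompt and generated tokens combined.

```python
# Hypothetical prompt and sampling parameters; adjust to your use case.
prompt = "Write a short essay on the history of cryptography:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```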
Good for
- Applications requiring less restrictive content generation.
- Use cases where the base model's censorship or refusal rate is a limiting factor.
- Exploring broader conversational and creative outputs without inherent content filters.
- Developers interested in experimenting with modified LLM behaviors for specific research or application needs.