0xA50C1A1/Qwen3-14B-Heretic
0xA50C1A1/Qwen3-14B-Heretic is a decensored version of Qwen/Qwen3-14B, Qwen's 14.8-billion-parameter causal language model. The model supports seamless switching between a 'thinking mode' for complex reasoning, math, and coding, and a 'non-thinking mode' for efficient general dialogue. It excels at reasoning, human-preference-aligned creative writing and role-playing, and agent tasks with external tools, and supports over 100 languages. Its primary use case is applications that need advanced reasoning and flexible conversational modes without content restrictions.
Qwen3-14B-Heretic: Decensored Qwen3 with Enhanced Reasoning and Flexible Modes
This model, 0xA50C1A1/Qwen3-14B-Heretic, is a decensored version of Qwen/Qwen3-14B created with the Heretic v1.2.0 tool. It retains the core capabilities of the original Qwen3-14B, a 14.8-billion-parameter causal language model developed by Qwen, while refusing far fewer prompts (3 of 100 versus 99 of 100 for the original).
Key Capabilities
- Dual Thinking Modes: Switches seamlessly between a 'thinking mode' for complex logical reasoning, mathematics, and code generation and a 'non-thinking mode' for efficient, general-purpose dialogue, so a single model can serve both deep-reasoning and latency-sensitive workloads.
- Enhanced Reasoning: Demonstrates significant improvements in mathematical problem-solving, code generation, and commonsense logical reasoning, surpassing previous Qwen models.
- Superior Human Preference Alignment: Excels in creative writing, role-playing, multi-turn dialogues, and instruction following, providing a more natural and engaging conversational experience.
- Advanced Agent Capabilities: Features strong tool-calling abilities, enabling precise integration with external tools in both thinking and non-thinking modes, achieving leading performance in complex agent-based tasks among open-source models.
- Multilingual Support: Supports over 100 languages and dialects with robust capabilities for multilingual instruction following and translation.
- Extended Context Length: Natively supports a context length of 32,768 tokens, extendable up to 131,072 tokens using the YaRN method for processing long texts.
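Extending the context beyond the native 32,768 tokens uses YaRN RoPE scaling: a factor of 4.0 stretches 32,768 positions to 4 × 32,768 = 131,072 tokens. A sketch of the `config.json` fragment, with field names as given in the Qwen3 documentation (verify against your transformers version):

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

Note that this static scaling applies to all inputs, so enable it only when you actually need long contexts; it can slightly degrade quality on short texts.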
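Qwen3 exposes mode selection two ways: the `enable_thinking` flag of `tokenizer.apply_chat_template`, and the `/think` / `/no_think` soft switches appended to a user turn. A minimal sketch of the soft-switch side; the `set_thinking` helper is illustrative, not part of any Qwen or transformers API:

```python
def set_thinking(messages: list[dict], enable: bool) -> list[dict]:
    """Append Qwen3's soft-switch tag to the latest user turn.

    `/think` requests thinking mode; `/no_think` requests non-thinking mode.
    (Helper name and structure are illustrative, not a Qwen API.)
    """
    tag = "/think" if enable else "/no_think"
    out = [dict(m) for m in messages]  # copy turns; don't mutate the caller's list
    assert out and out[-1]["role"] == "user", "soft switch belongs on a user turn"
    out[-1]["content"] = f"{out[-1]['content']} {tag}"
    return out


# With Hugging Face transformers, the hard switch is the template flag instead:
#   text = tokenizer.apply_chat_template(
#       messages, tokenize=False, add_generation_prompt=True,
#       enable_thinking=False,  # True wraps reasoning in a <think>...</think> block
#   )
```

In multi-turn chats the most recent soft switch takes precedence, so re-tagging the latest user message is enough to flip modes mid-conversation.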
Good For
- Applications requiring unrestricted content generation and reduced refusals.
- Tasks demanding complex logical reasoning, mathematical problem-solving, or code generation.
- Creative writing, role-playing, and highly engaging, multi-turn conversational AI.
- Developing intelligent agents that interact with external tools.
- Multilingual applications needing strong instruction following and translation across many languages.
- Scenarios requiring flexible model behavior that can adapt between deep reasoning and efficient general dialogue.