MagicalAlchemist/Qwen3-1.7B-Magic_decensored
Text Generation · Concurrency cost: 1 · Model size: 2B · Quant: BF16 · Context length: 32k · Published: Jan 22, 2026 · License: apache-2.0 · Architecture: Transformer

MagicalAlchemist/Qwen3-1.7B-Magic_decensored is a 1.7 billion parameter causal language model: a decensored version of Qwen/Qwen3-1.7B created with Heretic v1.1.0. The model has a 40,960-token context length and is modified to reduce refusals, making it more permissive across a range of applications. It retains Qwen3's core capabilities in reasoning, instruction following, agent tasks, and multilingual support, including the dual 'thinking' and 'non-thinking' modes for trading off depth of reasoning against response speed.


Overview

MagicalAlchemist/Qwen3-1.7B-Magic_decensored is a 1.7 billion parameter causal language model, derived from Qwen/Qwen3-1.7B and decensored using Heretic v1.1.0. The modification lowers the model's refusal count from 65/100 to 45/100, making it more permissive for diverse use cases. It keeps the original Qwen3 architecture, offering a 40,960-token context length and strong capabilities in reasoning, instruction following, and agentic tasks.

Key Capabilities

  • Decensored Behavior: Achieves a lower refusal rate (45/100) compared to the original model (65/100).
  • Dual Thinking Modes: Supports seamless switching between a 'thinking mode' for complex logical reasoning, math, and coding, and a 'non-thinking mode' for efficient general-purpose dialogue.
  • Enhanced Reasoning: Demonstrates improved performance in mathematics, code generation, and commonsense logical reasoning.
  • Superior Human Preference Alignment: Excels in creative writing, role-playing, multi-turn dialogues, and instruction following.
  • Agentic Expertise: Integrates with external tools and achieves leading performance among open-source models on complex agent-based tasks.
  • Multilingual Support: Capable of handling over 100 languages and dialects with strong multilingual instruction following and translation abilities.
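The mode switching above can be driven per turn via Qwen3's soft-switch convention of appending `/think` or `/no_think` to a user message. Below is a minimal sketch of building chat messages that way; `make_user_turn` is a hypothetical helper, and the assumption is that this model inherits the upstream Qwen3 switch behavior.

```python
def make_user_turn(content, thinking=None):
    """Build a chat message dict, optionally appending a Qwen3 mode switch.

    thinking=True appends "/think" (enable step-by-step reasoning),
    thinking=False appends "/no_think" (fast general-purpose replies),
    thinking=None leaves the message unchanged (template default applies).
    """
    if thinking is True:
        content = f"{content} /think"
    elif thinking is False:
        content = f"{content} /no_think"
    return {"role": "user", "content": content}


messages = [
    make_user_turn("Prove that the square root of 2 is irrational.", thinking=True),
    make_user_turn("Now summarize that proof in one sentence.", thinking=False),
]
for m in messages:
    print(m["content"])
```

When driving the model through `transformers`, the same effect can alternatively be achieved by passing `enable_thinking=True/False` to the tokenizer's `apply_chat_template`, which sets the default mode for the whole conversation rather than a single turn.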

Good For

  • Applications requiring a more permissive language model with reduced content restrictions.
  • Tasks demanding advanced logical reasoning, mathematical problem-solving, and code generation.
  • Creative writing, role-playing, and engaging multi-turn conversational agents.
  • Complex agentic workflows and tool integration.
  • Multilingual applications, instruction following, and translation across numerous languages.
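For the agentic workflows listed above, tools are typically described to the model as JSON schemas. The snippet below sketches one such definition in the OpenAI-style function-calling format that the `transformers` chat template machinery accepts via its `tools` argument; the tool name and fields here are illustrative, not part of this model's card.

```python
# Hypothetical tool definition for function calling (illustrative names).
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A list of such dicts would be passed as the `tools` argument to
# tokenizer.apply_chat_template(...) so the model can emit tool calls.
tools = [weather_tool]
print(tools[0]["function"]["name"])
```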