p-e-w/Qwen3-4B-Instruct-2507-heretic-v4
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Feb 14, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

p-e-w/Qwen3-4B-Instruct-2507-heretic-v4 is a 4-billion-parameter instruction-tuned causal language model based on Qwen's Qwen3-4B-Instruct-2507, with a native context length of 262,144 tokens. This version has been decensored with the Heretic tool, reducing refusals from 100/100 to 9/100 while preserving the base model's capabilities in instruction following, logical reasoning, and long-context understanding. It is intended for general-purpose conversational AI where broader response flexibility is desired.
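Since the model follows the standard Qwen3 instruct layout, it should load through the usual Hugging Face transformers APIs. The sketch below is an assumption based on that convention, not an official snippet from this card; it assumes `transformers` and `torch` are installed and that the weights can be fetched from the Hub (imports are deferred into the function so the helper can be defined without the heavy dependencies present).

```python
MODEL_ID = "p-e-w/Qwen3-4B-Instruct-2507-heretic-v4"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and return a single chat-style completion.

    Imports are done lazily so this module can be imported without
    transformers/torch installed; the download happens on first call.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # honors the BF16 weights where supported
        device_map="auto",
    )

    # Wrap the prompt in the model's chat template before tokenizing.
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Strip the prompt tokens so only the newly generated reply is decoded.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize this model in one sentence."))
```

For serving within the 32k context listed above rather than the full native 262,144 tokens, the same call works unchanged; the context limit is enforced by the runtime, not by this snippet.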
