p-e-w/Qwen3-4B-Instruct-2507-heretic-v3-quantized-processing
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 14, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The p-e-w/Qwen3-4B-Instruct-2507-heretic-v3-quantized-processing model is a 4-billion-parameter instruction-tuned causal language model based on the Qwen3-4B-Instruct-2507 architecture, with a native context length of 262,144 tokens. It is a decensored variant produced with Heretic v1.1.0, modified to reduce refusals while preserving general capabilities. The model performs well at instruction following, logical reasoning, text comprehension, mathematics, science, coding, and tool usage, making it suitable for applications that require less restrictive content generation.
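As an instruction-tuned Qwen-family model, it expects chat-formatted prompts; Qwen instruct models use the ChatML turn format, which in practice is applied automatically via the tokenizer's `apply_chat_template` method in `transformers`. As a rough illustration of what that template produces, here is a minimal sketch of building a ChatML prompt by hand (the helper name `build_chatml_prompt` is ours, not part of any library):

```python
def build_chatml_prompt(user_message: str,
                        system_message: str = "You are a helpful assistant.") -> str:
    """Build a ChatML-formatted prompt of the kind used by Qwen instruct models.

    Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers, and the
    prompt ends with an opened assistant turn for the model to complete.
    """
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("Summarize the plot of Hamlet in one sentence.")
print(prompt)
```

In real use you would pass a list of role/content message dicts to `tokenizer.apply_chat_template(...)` instead, which guarantees the exact template the model was trained with.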
