megabytes/Qwen2.5-0.5B-Instruct-heretic
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quantization: BF16 · Context Length: 32k · Published: Feb 21, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

megabytes/Qwen2.5-0.5B-Instruct-heretic is a 0.49-billion-parameter instruction-tuned causal language model based on the Qwen2.5 architecture, originally developed by the Qwen team and subsequently decensored using the Heretic v1.2.0 tool. The model retains Qwen2.5's improvements in knowledge, coding, mathematics, and instruction following, and supports a 32,768-token context length. Its primary differentiator is a significantly lower refusal rate than the original Qwen2.5-0.5B-Instruct, making it suitable for use cases that require less restrictive content generation.
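Like other Qwen2.5 instruct models, this model expects prompts in the ChatML format. In practice `tokenizer.apply_chat_template` handles this automatically; the sketch below builds the prompt by hand purely to illustrate the layout (the helper name `build_chatml_prompt` is ours, not part of any library):

```python
def build_chatml_prompt(messages):
    """Format a list of {'role', 'content'} dicts as a ChatML prompt.

    Qwen2.5-Instruct models wrap each turn as
    <|im_start|>role\ncontent<|im_end|>\n and end the prompt with an
    open assistant header to cue the model's reply.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation starts here
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = build_chatml_prompt(messages)
```

The resulting string can be tokenized and passed to any runtime that serves the model; with the Hugging Face tokenizer, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces the equivalent prompt.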
