Ronican34/Qwen2-7B-Instruct-heretic
Text Generation | Concurrency Cost: 1 | Model Size: 7.6B | Quant: FP8 | Ctx Length: 32k | Published: Apr 4, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold

Ronican34/Qwen2-7B-Instruct-heretic is a 7.6-billion-parameter instruction-tuned causal language model based on the Qwen2 architecture. It is a decensored version of Qwen/Qwen2-7B-Instruct, produced with the Heretic v1.2.0 tool, and supports a 32,768-token context length. The modification reduces refusals relative to the base model, making it suited to applications that call for less restrictive content generation.
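Since the model card gives no usage snippet, the following is a minimal sketch of how such a model could be loaded with the Hugging Face `transformers` library. The helper names (`build_messages`, `generate`), the dtype/device settings, and the sample prompt are illustrative assumptions, not part of the card; only the repository ID comes from this page.

```python
MODEL_ID = "Ronican34/Qwen2-7B-Instruct-heretic"  # repo ID from this card


def build_messages(user_prompt: str) -> list[dict]:
    # Qwen2-Instruct models follow the standard chat-message format.
    return [{"role": "user", "content": user_prompt}]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Lazy import so the sketch can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Render the chat messages with the model's own chat template.
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Summarize FP8 quantization in one paragraph."))
```

Note that the full-precision weights would need roughly 15 GB in FP16; the FP8 quantization advertised above is what makes single-GPU serving of a 7.6B model practical.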
