ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic is a 4-billion-parameter instruction-tuned conversational language model based on Qwen3-4B-Instruct, developed by ChiKoi7. The model has been decensored with the Heretic tool, which significantly reduces refusal rates in both English and Chinese compared to the original model. It is intended for generating less restricted responses in dual-language chat scenarios, making it suitable for applications that require uncensored conversational AI.
Model Overview
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic is a 4-billion-parameter instruction-tuned conversational LLM derived from Jackrong/GPT-5-Distill-Qwen3-4B-Instruct. Its primary distinction is the application of the Heretic tool, which significantly reduces censorship and refusal rates in both English and Chinese. The base model was fine-tuned on ShareGPT data and distilled from GPT-5 responses, aiming for high-quality, natural-sounding dialogue with low computational overhead.
Key Differentiators
- Decensored Output: Refuses 3/100 English prompts and 10/100 Chinese prompts, a substantial reduction from the original model's 97/100 (English) and 84/100 (Chinese).
- Dual-Language Support: Maintains strong performance in both English and Chinese, and the decensoring applies to both languages.
- Lightweight: At 4 billion parameters, it offers fast inference and low resource usage, making it suitable for resource-constrained deployments.
- Conversational Fluency: Inherits GPT-5's conversational style and helpfulness through the distillation process.
Intended Use Cases
This model is recommended for:
- Casual chat in Chinese/English without censorship.
- General knowledge explanations and reasoning guidance.
- Code suggestions and simple debugging tips.
- Writing assistance, including editing, summarizing, and rewriting.
- Role-playing conversations that require less restrictive responses.
It is not suitable for high-risk decision-making or for tasks that require up-to-date factual information.
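For the chat scenarios above, the model can be loaded with the Hugging Face `transformers` library like any other Qwen3-based instruct model. The sketch below is an illustrative example, not an official snippet from the model author; it assumes `transformers` and `torch` are installed and that the model uses a standard chat template.

```python
# Minimal inference sketch for this model (illustrative; assumes the
# standard transformers chat-template workflow applies to this checkpoint).
MODEL_ID = "ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic"


def build_messages(user_prompt, system_prompt="You are a helpful assistant."):
    """Assemble a chat-template message list for the model."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def generate_reply(user_prompt, max_new_tokens=256):
    # Heavy imports are deferred so build_messages stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Apply the model's chat template and tokenize in one step.
    inputs = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens; decode only the newly generated reply.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate_reply("Explain model distillation in two sentences."))
```

At 4B parameters the model fits comfortably on a single consumer GPU in half precision; `device_map="auto"` lets `transformers` place it on whatever hardware is available.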