n0ctyx/Qwen3-4B-Instruct-Uncensored
Text Generation · Open Weights
Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Mar 29, 2026 · License: apache-2.0 · Architecture: Transformer

n0ctyx/Qwen3-4B-Instruct-Uncensored is a 4 billion parameter instruction-tuned causal language model based on the Qwen3-4B-Instruct-2507 architecture, developed by n0ctyx. This model has undergone directional abliteration to remove safety refusals, allowing it to respond to a broader range of prompts without artificial gatekeeping. With a context length of 262,144 tokens, it retains the original model's intelligence while significantly reducing refusal rates, making it suitable for creative writing, red-teaming, and unfiltered assistance.
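A minimal sketch of running the model locally with the Hugging Face `transformers` library, assuming the repository id above is available on the Hub and that the model uses the standard Qwen3 chat template (neither is confirmed by this page; adjust `MODEL_ID` if the weights are hosted elsewhere):

```python
MODEL_ID = "n0ctyx/Qwen3-4B-Instruct-Uncensored"  # assumed Hub repo id

def build_messages(user_prompt: str) -> list[dict]:
    """Chat-format messages accepted by apply_chat_template."""
    return [{"role": "user", "content": user_prompt}]

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply to a single user prompt."""
    # Imported lazily so the prompt helper stays usable without the library.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Render the chat messages through the model's own chat template.
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

Note that a BF16 4B-parameter model needs roughly 8 GB of accelerator memory; `device_map="auto"` lets `transformers` place weights on whatever hardware is available.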
