The n0ctyx/Qwen3-1.7B-uncensored model is a 1.7 billion parameter language model based on the Qwen3 architecture, featuring a 32,768 token context length. Developed by n0ctyx, this model has undergone directional obliteration to remove safety refusals, aiming to provide unfiltered responses. It is specifically designed for use cases requiring direct, uncensored content generation, creative writing, and red-teaming.
Overview
n0ctyx/Qwen3-1.7B-uncensored is a 1.7 billion parameter language model derived from the original Qwen/Qwen3-1.7B base model. Its primary distinction lies in the removal of safety refusals through a technique called directional obliteration, which surgically modifies the model's activation space to eliminate artificial gatekeeping without retraining or dataset changes. This process aims to preserve the base model's intelligence while enabling it to respond to all prompts without refusal or lecturing.
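Directional-ablation techniques of this kind typically work by estimating a "refusal direction" in the model's hidden-state space and projecting it out of the activations (or the weights that write to them). The following is a minimal sketch of that projection step only, not the actual procedure used for this model; the direction vector and activation values are illustrative:

```python
import numpy as np

def ablate_direction(activations: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
    """Remove the component of each activation vector along refusal_dir."""
    v = refusal_dir / np.linalg.norm(refusal_dir)  # unit refusal direction
    # Project each row onto v and subtract that component.
    return activations - np.outer(activations @ v, v)

# Toy example: three activation vectors in a 4-dimensional hidden space.
acts = np.array([[1.0, 2.0, 0.0, 1.0],
                 [0.5, 0.0, 1.0, 0.0],
                 [2.0, 1.0, 1.0, 1.0]])
direction = np.array([0.0, 1.0, 0.0, 0.0])  # hypothetical refusal direction
cleaned = ablate_direction(acts, direction)
# Every row of `cleaned` is now orthogonal to `direction`.
```

Because only a projection is applied, the rest of each activation vector is untouched, which is why the approach can avoid retraining while aiming to preserve the base model's capabilities.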
Key Capabilities
- Uncensored Responses: Designed to answer directly, without safety-based refusals, although the current version still exhibits some refusals (76 of 100 test prompts).
- Thinking Mode Support: Integrates a "thinking mode" (`<think>...</think>`) for step-by-step reasoning, suited to complex tasks like math or coding, alongside a non-thinking mode for general chat.
- High Context Length: Features a substantial 32,768 token context window, allowing it to process and generate longer texts.
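Because thinking-mode output wraps the model's reasoning in `<think>...</think>` tags, downstream code usually separates that block from the final answer before display. A minimal sketch (the helper name `split_thinking` and the sample response are illustrative, not part of the model's API):

```python
import re

def split_thinking(response: str) -> tuple[str, str]:
    """Split a response into (reasoning, answer) around the <think> block."""
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        # No thinking block (e.g. non-thinking mode): everything is the answer.
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer

raw = "<think>2 + 2 is 4.</think>\nThe answer is 4."
reasoning, answer = split_thinking(raw)
# reasoning -> "2 + 2 is 4."
# answer    -> "The answer is 4."
```

The same helper degrades gracefully in non-thinking mode, where no `<think>` block is emitted and the whole response is returned as the answer.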
Good For
- Creative Writing & Roleplay: Ideal for generating content without typical AI content restrictions.
- Red-Teaming & Safety Research: Useful for exploring model vulnerabilities and testing safety mechanisms.
- Synthetic Dataset Generation: Can be employed to create diverse datasets without content filtering.
- Unfiltered Assistance: Provides direct, unhedged answers for various queries.