electroglyph/Qwen3-4B-Instruct-2507-uncensored

Hugging Face
TEXT GENERATION | Concurrency Cost: 1 | Model Size: 4B | Quant: BF16 | Ctx Length: 32k | Published: Nov 8, 2025 | License: apache-2.0 | Architecture: Transformer | Open Weights | Warm

electroglyph/Qwen3-4B-Instruct-2507-uncensored is a 4 billion parameter instruction-tuned causal language model based on the Qwen3 architecture. Developed by electroglyph, it is a fine-tune of Qwen3-4B-Instruct-2507 modified specifically to remove censorship. It retains the base model's 32,768-token context length and is intended primarily for applications that require an uncensored conversational AI.


Model Overview

This model, electroglyph/Qwen3-4B-Instruct-2507-uncensored, is a 4 billion parameter instruction-tuned language model derived from Qwen3-4B-Instruct-2507. Developed by electroglyph, its primary modification is a minimal supervised fine-tuning (SFT) pass aimed at removing the censorship inherent in the original model while preserving its core capabilities.

Key Characteristics

  • Base Model: Qwen3-4B-Instruct-2507.
  • Parameter Count: 4 billion parameters.
  • Context Length: Supports a context window of 32768 tokens.
  • Fine-tuning Objective: Trained with a deliberately minimal amount of fine-tuning to remove censorship while preserving the base model's capabilities.
  • GGUF Availability: A UD-Q4_K_XL GGUF quantization, generated with quant_clone, is provided for efficient local deployment.
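As a sketch of how the model might be loaded and prompted with the Hugging Face transformers library (the repo id is taken from this card; the prompt text and generation settings below are illustrative assumptions, not part of the model card):

```python
MODEL_ID = "electroglyph/Qwen3-4B-Instruct-2507-uncensored"

def build_chat(prompt: str) -> list:
    # Chat-message format consumed by tokenizer.apply_chat_template.
    return [{"role": "user", "content": prompt}]

def main() -> None:
    # Heavy imports are kept inside main() so the helper above stays
    # importable even on machines without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    # Example prompt (illustrative only).
    messages = build_chat("Summarize the Qwen3 architecture in two sentences.")
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)

    # Decode only the newly generated tokens, not the prompt.
    reply = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    print(reply)
```

Calling `main()` downloads the BF16 weights (roughly 8 GB) on first use; for lighter local inference, the UD-Q4_K_XL GGUF mentioned above can be run with a GGUF-compatible runtime such as llama.cpp instead.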

Use Cases

This model is suited to developers and applications that need an instruction-following language model with a 32K context length and for which removal of censorship is a hard requirement, i.e. scenarios where the content filtering of the base Qwen3-4B-Instruct-2507 is undesirable.
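Applications that exercise the full 32K context window need some form of history management. A minimal sketch of one common approach, dropping the oldest messages once the budget is exceeded; the whitespace-split token estimate and the `reserve` parameter are illustrative assumptions (a real application would count tokens with the model's tokenizer):

```python
# Context window of Qwen3-4B-Instruct-2507, per this card.
CONTEXT_LIMIT = 32768

def approx_tokens(message: dict) -> int:
    # Crude token estimate for illustration: one token per whitespace-
    # separated word. Replace with a real tokenizer count in practice.
    return len(message["content"].split())

def trim_history(messages: list, reserve: int = 1024) -> list:
    """Drop the oldest messages until the remainder fits the context
    budget, reserving `reserve` tokens for the model's reply."""
    budget = CONTEXT_LIMIT - reserve
    kept = []
    total = 0
    for msg in reversed(messages):  # walk newest-first
        cost = approx_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order
```

Trimming whole messages (rather than truncating mid-message) keeps the chat template well-formed, at the cost of occasionally discarding slightly more context than strictly necessary.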