electroglyph/Qwen3-4B-Instruct-2507-uncensored-unslop
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Nov 9, 2025 · License: apache-2.0 · Architecture: Transformer
The electroglyph/Qwen3-4B-Instruct-2507-uncensored-unslop model is a 4-billion-parameter instruction-tuned language model, fine-tuned by electroglyph from the Qwen3-4B-Instruct-2507-uncensored base. It focuses on mitigating "slop" (the repetitive or generic phrasing common in LLM output) often found in uncensored models, aiming for a more concise, direct generation style. This makes it well suited for applications that want the base model's uncensored behavior without its verbosity or "chatty" tone.
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model (per-config values unavailable in this capture): temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p.
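The sampler parameters listed above are standard decoding controls rather than anything specific to this model. As an illustrative sketch (not Featherless's or Qwen's actual implementation), temperature, top_k, top_p, and min_p can be understood as successive transforms on the token probability distribution:

```python
import math

def apply_sampling(logits, temperature=1.0, top_k=0, top_p=1.0, min_p=0.0):
    """Turn raw logits into a filtered sampling distribution.

    temperature rescales logits before the softmax; top_k keeps only
    the k most probable tokens (0 disables); top_p keeps the smallest
    set of tokens whose cumulative probability reaches p; min_p drops
    tokens whose probability is below min_p times the top probability.
    Illustrative helper, not a real library API.
    """
    # Temperature: divide logits, then softmax (numerically stable form).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    keep = set(range(len(probs)))

    # top_k: retain only the k highest-probability tokens.
    if top_k > 0:
        order = sorted(keep, key=lambda i: probs[i], reverse=True)
        keep &= set(order[:top_k])

    # top_p (nucleus): smallest prefix of the probability-sorted
    # tokens whose cumulative mass reaches top_p.
    if top_p < 1.0:
        order = sorted(keep, key=lambda i: probs[i], reverse=True)
        cum, nucleus = 0.0, set()
        for i in order:
            nucleus.add(i)
            cum += probs[i]
            if cum >= top_p:
                break
        keep &= nucleus

    # min_p: cutoff relative to the most probable surviving token.
    if min_p > 0.0:
        cutoff = min_p * max(probs[i] for i in keep)
        keep = {i for i in keep if probs[i] >= cutoff}

    # Renormalize over the surviving tokens.
    z = sum(probs[i] for i in keep)
    return {i: probs[i] / z for i in keep}
```

Frequency, presence, and repetition penalties act earlier in the pipeline, adjusting logits of already-generated tokens before this filtering step.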