DavidAU/Qwen3-Horror-Instruct-Uncensored-262K-ctx-4B

Text Generation · Model Size: 4B · Quant: BF16 · Ctx Length: 32K · Published: Sep 7, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

DavidAU/Qwen3-Horror-Instruct-Uncensored-262K-ctx-4B is a 4 billion parameter instruction-tuned causal language model, fine-tuned by DavidAU on a large horror dataset for 5 epochs. Based on the Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v1 model, it supports a context length of up to 262,144 tokens (256K) and is optimized for generating vivid, detailed, and graphic horror prose. The model excels in creative writing tasks requiring depth and intensity, particularly when directed with specific content requirements.


Model Overview

DavidAU/Qwen3-Horror-Instruct-Uncensored-262K-ctx-4B is a 4 billion parameter instruction-tuned model, fine-tuned by DavidAU specifically on a large horror dataset. This fine-tuning, performed over 5 epochs using Unsloth, significantly alters its prose generation and creative abilities, enhancing depth and detail in its output.

Key Capabilities

  • Horror Prose Generation: Excels at creating vivid, graphic, and detailed horror narratives, as demonstrated by provided examples.
  • Creative Writing: Improves general creative abilities, making prose more immersive and impactful.
  • Extended Context: Supports a maximum context length of 262,144 tokens (256K), allowing for longer and more complex narrative generation.
  • Uncensored Potential: While not inherently explicit, it can be directed to generate graphic, profane, or x-rated content with specific prompts; it typically requires a "push" in the prompt to reach the desired intensity.

Recommended Usage

  • Creative Applications: Ideal for generating fictional stories, scenes, or descriptive passages within the horror genre.
  • Prompting: A temperature of 0.8 to 1.5 (or up to 2 for more extreme output) and a repetition penalty of 1.05 to 1.1 are recommended for creative output. A detailed system prompt is also beneficial.
  • Technical Settings: Users are advised to use the Jinja or ChatML chat template, and can benefit from adjusting the "Smoothing_factor" sampler in interfaces such as KoboldCpp or oobabooga's text-generation-webui for smoother output.
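
The template and sampler recommendations above can be sketched in code. This is a minimal illustration, not the author's setup: the ChatML layout is the standard format Qwen3 models use, the parameter names follow common Hugging Face `transformers` generation-argument conventions, and the system prompt text is a hypothetical placeholder.

```python
# Sketch: build a ChatML-formatted prompt and a sampler configuration
# following the recommended settings above.

def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation in ChatML, the template Qwen3 uses."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Sampler values drawn from the recommended ranges above; names follow
# Hugging Face `transformers` generation arguments. "Smoothing_factor" is
# a separate sampler exposed by KoboldCpp / text-generation-webui.
sampler_settings = {
    "do_sample": True,
    "temperature": 1.2,          # recommended range: 0.8-1.5 (up to 2)
    "repetition_penalty": 1.08,  # recommended range: 1.05-1.1
}

# Hypothetical example prompt for horror prose generation.
prompt = build_chatml_prompt(
    system="You are a horror-fiction writer. Write vivid, detailed prose.",
    user="Write the opening scene of a story set in an abandoned lighthouse.",
)
print(prompt.startswith("<|im_start|>system"))  # → True
```

The string returned by `build_chatml_prompt` ends with the open assistant turn, so the model's completion continues directly as the assistant's reply; the `sampler_settings` dict can be passed as keyword arguments to a `generate()`-style call in whichever interface is used.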