johnmayhem1/Qwen-7B-Story-Finetuned
johnmayhem1/Qwen-7B-Story-Finetuned is a 7.6-billion-parameter language model based on Qwen2.5-7B-Instruct, fine-tuned for detailed, high-stakes storytelling with a focus on physical tension and psychological realism. It is trained to adhere to complex environmental logic and constraint boundaries, avoiding 'location bleed' while generating creative, non-repetitive descriptive prose. The model also retains robust multi-turn conversational ability, making it suitable for interactive narrative applications.
Overview
This model, johnmayhem1/Qwen-7B-Story-Finetuned, is a specialized variant of the Qwen2.5-7B-Instruct base model, fine-tuned using QLoRA (Rank 32, Alpha 64). Its primary focus is on advanced narrative generation, combining detailed storytelling with strong conversational capabilities.
Key Capabilities
- High-Stakes Storytelling: Engineered for narratives emphasizing physical tension and psychological realism.
- Environmental Logic Adherence: Specifically trained to respect complex environmental constraints and avoid 'location bleed' in descriptions.
- Creative & Non-Repetitive Prose: Generates highly creative and varied descriptive text.
- Multi-Turn Conversational Intelligence: Maintains robust and coherent dialogue over multiple turns.
- Custom Training Data: Fine-tuned on a blend of custom synthetic narrative data and OpenHermes-2.5 conversational data.
Good For
This model is well suited to applications requiring sophisticated narrative generation, such as interactive fiction, role-playing games, or creative writing assistants where consistent world-building, detailed descriptions, and engaging dialogue are crucial. Various GGUF quantizations are available, with q5_k_m recommended as the 'sweet spot' for 7B models. The model is compatible with tools such as SillyTavern, LMStudio, and KoboldCpp, and expects ChatML prompt formatting.
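Frontends like SillyTavern and LMStudio apply ChatML automatically, but if you call the model directly (for example through a raw completion API over a GGUF build), you must construct the ChatML prompt yourself. Below is a minimal sketch; the helper name and example messages are illustrative, and only the `<|im_start|>`/`<|im_end|>` token layout is the actual ChatML convention:

```python
def format_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string.

    Each message becomes:
        <|im_start|>{role}\n{content}<|im_end|>\n
    and the prompt ends with an open assistant turn so the model
    generates the reply.
    """
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    # Cue the model to begin its response.
    prompt += "<|im_start|>assistant\n"
    return prompt


if __name__ == "__main__":
    # Hypothetical storytelling conversation.
    messages = [
        {"role": "system", "content": "You are a vivid, constraint-aware storyteller."},
        {"role": "user", "content": "Describe the cellar without referencing the street above."},
    ]
    print(format_chatml(messages))
```

Generation should stop on the `<|im_end|>` token so the model does not continue past its own turn.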