Naphula/StormSeeker-24B-v1
Naphula/StormSeeker-24B-v1 is a 24 billion parameter merged language model, created using a custom flux merge method from six base models including Loki, PaintedFantasy, and Hearthfire. Optimized for narrative and roleplay, it exhibits uncensored responses and can produce graphic content. The model performs best with Mistral non-Tekken prompting, achieving a score of 8867, and has a context length of 32768 tokens.
StormSeeker-24B-v1 Overview
StormSeeker-24B-v1 is a 24 billion parameter merged language model developed by Naphula using a custom flux merge method. It integrates six distinct base models, with Loki, PaintedFantasy, and Hearthfire having a notable influence on its character. The model is designed specifically for narrative generation and roleplay, including potentially violent and graphic erotic material, and produces largely uncensored output.
Key Characteristics & Performance
- Uncensored Output: The model is noted for its uncensored nature, responding to some harmful prompts without refusals. A light jailbreak can effectively bypass most censorship.
- Optimized Prompting: For optimal performance, users are advised to use Mistral non-Tekken prompting, which yields a significantly higher score (8867) compared to Tekken prompting (6933).
- Merge Method: The model was created using a flux merge method (version 5, Y6 config), which involved 1005 iterations to achieve maximum BF16 fidelity.
- Quantization Benefits: Merges made with the flux method are suggested to benefit greatly from smaller block sizes such as IQ4_NL quantizations, potentially performing on par with or better than Q6_K, though this claim is not yet empirically verified.
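Since the card recommends Mistral non-Tekken prompting, a minimal prompt-builder sketch may help. The [INST]-style template below is an assumption about the non-Tekken format (whitespace and BOS/EOS placement vary between tokenizer versions); verify it against the chat template in the model's tokenizer_config.json before relying on it.

```python
# Sketch of a Mistral non-Tekken ([INST]-style) prompt builder.
# The exact spacing conventions are assumptions -- check the model's
# actual chat template before use.

def build_prompt(system: str, turns: list[tuple[str, str]], user: str) -> str:
    """Assemble a multi-turn prompt in the [INST] ... [/INST] format.

    turns: (user_message, assistant_reply) pairs already exchanged.
    The system prompt is prepended to the first user message.
    """
    prompt = "<s>"
    first = True
    for u, a in turns:
        content = f"{system}\n\n{u}" if first and system else u
        first = False
        prompt += f"[INST] {content} [/INST] {a}</s>"
    content = f"{system}\n\n{user}" if first and system else user
    prompt += f"[INST] {content} [/INST]"
    return prompt

p = build_prompt("You are a narrator.", [("Hi", "Hello.")], "Continue the story.")
print(p)
```

Adjust the system prompt here to steer (or constrain) the uncensored behavior described above.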
Use Cases & Considerations
- Narrative and Roleplay: Ideal for applications requiring creative writing, detailed narratives, and roleplay scenarios, especially those that may involve graphic or uncensored content.
- Content Warning: Users should be aware of its capacity to produce violent and graphic erotic content and adjust system prompts accordingly.
- Ablation: An MPOA-Adapter is available for ablations to modify the model's behavior.
Available Quantizations
Due to constraints, the model is primarily uploaded in IQ4_NL, Q6_K, and Q8_K_XL GGUF formats. For other quantizations (e.g., IQ4_XS, EXL3, MLX), the card links to uploads by other community members.
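As a rough how-to, the IQ4_NL GGUF can be fetched and run with llama.cpp along these lines. The repository and file names below are assumptions for illustration; check the model page for the actual quantization filenames before downloading.

```shell
# Hypothetical example: download the IQ4_NL GGUF and run it with llama.cpp.
# Repo and filename are assumptions -- verify them on the model page.
huggingface-cli download Naphula/StormSeeker-24B-v1-GGUF \
  StormSeeker-24B-v1-IQ4_NL.gguf --local-dir ./models

# -c 32768 matches the model's stated context length; lower it to save RAM.
llama-cli -m ./models/StormSeeker-24B-v1-IQ4_NL.gguf \
  -c 32768 -p "[INST] Tell me a short story. [/INST]"
```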