grimjim/kukulemon-7B
grimjim/kukulemon-7B is a 7 billion parameter language model created by grimjim through a SLERP merge of two Kunoichi reasoning models with KatyTheCutie/LemonadeRP-4.5.3, a model focused on roleplay. The merge aims to combine strong reasoning with improved roleplaying performance. While the model advertises a 32K context length, informal testing suggests coherence holds best up to about 8K tokens. It is suited to applications that need both logical processing and creative conversational ability.
kukulemon-7B: Merged for Reasoning and Roleplay
This 7 billion parameter model, grimjim/kukulemon-7B, is a merge designed to combine robust reasoning with strong roleplaying capability. Created by grimjim using the SLERP merge method, it blends two model lineages: Kunoichi models known for reasoning, and KatyTheCutie/LemonadeRP-4.5.3, a model focused on roleplay.
Key Capabilities
- Enhanced Reasoning: Built upon two Kunoichi models known for their reasoning prowess, aiming for a "dense" encoding of logical thought.
- Roleplay Optimization: Incorporates KatyTheCutie/LemonadeRP-4.5.3, specifically targeting improved performance in roleplaying scenarios.
- Flexible Prompting: Supports Alpaca-format prompts and has also been tested effectively with ChatML using specific temperature and minP settings.
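The Alpaca prompt format mentioned above follows a well-known instruction/response template. A minimal sketch of a prompt builder (the helper name `alpaca_prompt` is illustrative, not part of the model's tooling; frontends may vary the exact system line):

```python
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Build a prompt in the standard Alpaca template.

    The optional ### Input: block is included only when input_text is given.
    """
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

The model's completion is then generated after the trailing `### Response:` marker.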
Technical Details
- Merge Method: Utilizes the SLERP (Spherical Linear Interpolation) merge method for combining model weights.
- Context Length: While the model claims a 32K context window, informal testing suggests optimal coherence is maintained up to 8K tokens.
- Quantization Options: Available in various quantized formats, including GGUF-IQ-Imatrix, 8.0bpw h8 exl2, and Q8_0 GGUF.
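SLERP interpolates between two weight tensors along the arc between them rather than along the straight line, which tends to preserve the magnitude structure of the weights better than plain averaging. A minimal NumPy sketch of the underlying formula (simplified relative to real merge tooling such as mergekit, which applies this per tensor with configurable interpolation factors):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the two directions.
    """
    v0 = np.asarray(v0, dtype=np.float64)
    v1 = np.asarray(v1, dtype=np.float64)
    # Angle between the two weight directions.
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    theta = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1.0 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```

In a full merge, this interpolation is applied to each pair of corresponding tensors in the two source checkpoints.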
Ideal Use Cases
This model is particularly well-suited for applications requiring a balance of logical understanding and creative, engaging conversational outputs, such as advanced chatbots, interactive storytelling, and complex roleplaying simulations where both coherence and imaginative responses are crucial.
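Given the gap between the advertised 32K context and the ~8K tokens of observed coherence, long-running chat applications may want to cap the effective context. A minimal sketch of a sliding-window trim over token IDs (the helper name `trim_context` and the `keep_prefix` parameter are illustrative assumptions, not part of the model's tooling):

```python
from typing import List, Sequence

def trim_context(token_ids: Sequence[int],
                 max_tokens: int = 8192,
                 keep_prefix: int = 0) -> List[int]:
    """Keep an optional system-prompt prefix plus the most recent tokens,
    so the total stays within the coherence budget observed in testing."""
    if len(token_ids) <= max_tokens:
        return list(token_ids)
    prefix = list(token_ids[:keep_prefix])
    budget = max_tokens - keep_prefix
    # Drop the oldest conversation tokens, keeping the newest ones.
    return prefix + list(token_ids[-budget:])
```

Real chat frontends typically trim at message boundaries rather than raw token positions, but the budgeting idea is the same.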