grimjim/kunoichi-lemon-royale-v3-32K-7B
The grimjim/kunoichi-lemon-royale-v3-32K-7B is a 7-billion-parameter language model based on the Mistral architecture, created by grimjim using a merge densification approach. This method merges a highly creative model at a very low weight to enhance output variability while preserving coherence. The model is optimized for creative text generation and has been tested at a practical context length of at least 16K tokens.
Model Overview
The kunoichi-lemon-royale-v3-32K-7B is a 7-billion-parameter language model developed by grimjim, built on the Mistral architecture. It was created with mergekit using a technique called merge densification: a highly creative, dense model is merged into a base model at a very low weight (0.02), aiming to increase output variability without compromising the base model's coherence.
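In task-arithmetic terms, this kind of merge adds a scaled "task vector" (the donor model's weights minus the base's) to the base weights. A toy sketch with plain Python lists illustrates the idea; the 0.02 weight is from the model card, while the tensor values are purely illustrative:

```python
# Toy task-arithmetic merge: merged = base + weight * (donor - base).
# The 0.02 weight matches the model card; the numbers are illustrative
# stand-ins for real parameter tensors.
base = [0.5, -1.2, 0.3]    # a base-model parameter tensor
donor = [0.9, -1.0, 0.7]   # the creative donor model's tensor
weight = 0.02              # very low weight keeps the result near the base

merged = [b + weight * (d - b) for b, d in zip(base, donor)]
print(merged)  # each value nudged only slightly toward the donor
```

Because the weight is so small, the merged parameters stay close to the base model's, which is how coherence is preserved while still injecting some of the donor's variability.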
Key Capabilities
- Enhanced Creativity: Designed to produce more varied and creative outputs due to the merge densification approach.
- Mistral-based Architecture: Leverages the robust foundation of the Mistral 7B model.
- Extended Context Length: Tested for practical use with a context length of at least 16K tokens, though the model name suggests 32K.
- Merge Method: Uses the task arithmetic merge method, with grimjim/kunoichi-lemon-royale-v2-32K-7B as the base and grimjim/rogue-enchantress-32k-7B as the additional merged model.
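The merge described above could be expressed as a mergekit configuration along these lines. The method, model names, and 0.02 weight come from the card; other fields (such as dtype) are assumptions:

```yaml
# Hypothetical mergekit config reconstructing the merge described above.
# merge_method, models, and the 0.02 weight are from the model card;
# dtype is an assumption.
merge_method: task_arithmetic
base_model: grimjim/kunoichi-lemon-royale-v2-32K-7B
models:
  - model: grimjim/kunoichi-lemon-royale-v2-32K-7B
  - model: grimjim/rogue-enchantress-32k-7B
    parameters:
      weight: 0.02
dtype: bfloat16
```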
Recommended Usage
This model is particularly suited for applications requiring creative text generation where output diversity is valued. It has been tested with ChatML instruct templates, a temperature of 1.0, and a minP of 0.02, suggesting its suitability for open-ended and imaginative tasks. The low merge weight of 0.02 was intentionally chosen to align with the minP setting, further emphasizing its design for nuanced output control.
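A minimal sketch of preparing a ChatML prompt and the sampling settings from the card (temperature 1.0, min_p 0.02). The `chatml_prompt` helper is a hypothetical illustration of the ChatML layout, not an official API; in practice a tokenizer's chat template would do this formatting:

```python
# Sketch: ChatML prompt formatting plus the sampling settings suggested
# in the model card. chatml_prompt is a hypothetical helper for
# illustration; real code would use the tokenizer's chat template.

def chatml_prompt(messages):
    """Format a list of {role, content} dicts as a ChatML string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
             for m in messages]
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# Sampling settings from the model card.
generation_kwargs = {
    "temperature": 1.0,  # full-temperature sampling for creative output
    "min_p": 0.02,       # matches the 0.02 merge weight noted above
    "do_sample": True,
}

prompt = chatml_prompt([
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write the opening line of a mystery."},
])
print(prompt)
```

These kwargs follow the Hugging Face transformers `generate` parameter names and would be passed alongside the tokenized prompt when sampling from the model.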