Ppoyaa/FusedKuno
FusedKuno is a 7 billion parameter language model created by Ppoyaa, formed by merging Nitral-AI/Kunocchini-7b-128k-test and Virt-io/FuseChat-Kunoichi-10.7B with the slerp merge method. It combines the strengths of its constituent models and offers an 8192-token context length. It is designed for general text generation tasks, blending different model characteristics for potentially enhanced performance.
FusedKuno: A Merged 7B Language Model
FusedKuno is a 7 billion parameter language model developed by Ppoyaa, created through a strategic merge of two distinct models: Nitral-AI/Kunocchini-7b-128k-test and Virt-io/FuseChat-Kunoichi-10.7B. This merge was performed using the slerp (spherical linear interpolation) method via LazyMergekit, aiming to combine and balance the capabilities of its base models.
Key Characteristics
- Architecture: A composite model derived from 7B and 10.7B parameter base models, resulting in a 7B parameter output.
- Merge Method: Uses slerp (spherical linear interpolation) to combine model weights, with specific interpolation t values applied to the self-attention and MLP layers (see the sketch after this list).
- Context Length: Supports an 8192-token context window, suitable for processing moderately long inputs.
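The merge itself was produced with LazyMergekit, but the core idea behind slerp is straightforward to illustrate. The following is a minimal NumPy sketch of spherical linear interpolation over two weight tensors with layer-specific t values; the tensor shapes and t values here are illustrative assumptions, not the actual configuration used to build FusedKuno.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between two weight tensors.
    t=0 returns v0, t=1 returns v1."""
    flat0, flat1 = v0.ravel(), v1.ravel()
    # Angle between the two (normalised) weight vectors.
    u0 = flat0 / (np.linalg.norm(flat0) + eps)
    u1 = flat1 / (np.linalg.norm(flat1) + eps)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.abs(np.sin(omega)) < eps:
        # Nearly colinear vectors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return (s0 * flat0 + s1 * flat1).reshape(v0.shape)

# Toy tensors standing in for one self-attention and one MLP weight matrix.
attn_a, attn_b = np.random.randn(64, 64), np.random.randn(64, 64)
mlp_a, mlp_b = np.random.randn(64, 256), np.random.randn(64, 256)

merged_attn = slerp(0.3, attn_a, attn_b)  # leans toward model A
merged_mlp = slerp(0.7, mlp_a, mlp_b)     # leans toward model B
```

Using different t values per component is what lets a slerp merge favor one parent model's attention behavior while drawing more heavily on the other's MLP layers.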
Use Cases
FusedKuno is a general-purpose language model, potentially suitable for a variety of text generation tasks where a blend of characteristics from its merged components could be beneficial. Developers can integrate it using the Hugging Face transformers library for applications requiring text completion, conversational AI, or content generation.
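As a starting point, the snippet below shows a typical way to load and run the model with the transformers library. The dtype, device placement, and generation settings are illustrative assumptions rather than values recommended specifically for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ppoyaa/FusedKuno"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumes a GPU with enough memory for fp16
    device_map="auto",
)

prompt = "Write a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```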