Azazelle/Maylin-7b
Azazelle/Maylin-7b is a 7 billion parameter language model based on the Mistral-7B-v0.1 architecture, created through a DARE merge. This model is specifically designed to enhance coherence and reduce undesirable biases present in the Argetsu model. It aims to provide a more balanced and focused output for general language generation tasks.
Maylin-7b Model Overview
Maylin-7b is a 7 billion parameter language model developed by Azazelle, built upon the Mistral-7B-v0.1 base architecture. The model was created with the DARE_TIES merge method, which combines DARE's random dropping and rescaling of task-vector deltas with TIES-style sign-consensus merging, blending several other models to achieve its specific characteristics.
Key Capabilities
- Enhanced Coherence: The primary goal of Maylin-7b is to improve the overall coherence of generated text.
- Bias Reduction: It aims to mitigate certain undesirable tendencies, such as the excessive 'horniness' observed in constituent models like Argetsu.
- General Language Generation: Suitable for a variety of text generation tasks where balanced and coherent output is desired.
Merge Details
The model was constructed using a DARE_TIES merge method, integrating the following models with specific weights and densities:
- mistralai/Mistral-7B-v0.1 (base model)
- SanjiWatsuki/Sonya-7B (weight: 0.45, density: 0.75)
- Azazelle/Argetsu (weight: 0.39, density: 0.70)
- Azazelle/Tippy-Toppy-7b (weight: 0.22, density: 0.52)
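The card does not include the original merge configuration file. A plausible mergekit-style YAML reconstruction from the listed weights and densities might look like the following (the `dtype` value is an assumption, as it is not stated in the card):

```yaml
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: SanjiWatsuki/Sonya-7B
    parameters:
      weight: 0.45
      density: 0.75
  - model: Azazelle/Argetsu
    parameters:
      weight: 0.39
      density: 0.70
  - model: Azazelle/Tippy-Toppy-7b
    parameters:
      weight: 0.22
      density: 0.52
dtype: float16  # assumption: not stated in the card
```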
This specific merge configuration was chosen to refine the output characteristics, making Maylin-7b a more controlled and reliable option for general use cases.
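To make the weights and densities above concrete, here is a toy numerical sketch of how a DARE_TIES-style merge operates on flat parameter vectors. This is illustrative only, not the actual merge code: real tools work per tensor on full checkpoints, and the function name `dare_ties_merge` is a hypothetical helper introduced here.

```python
import numpy as np

def dare_ties_merge(base, deltas, weights, densities, rng=None):
    """Toy sketch of a DARE_TIES merge on flat parameter vectors.

    base:      base model parameters (np.ndarray)
    deltas:    list of task vectors (model_params - base), one per model
    weights:   per-model merge weights
    densities: per-model keep probability for DARE dropping
    """
    if rng is None:
        rng = np.random.default_rng(0)
    pruned = []
    for d, p in zip(deltas, densities):
        # DARE: randomly drop a (1 - density) fraction of each delta's
        # entries, then rescale the survivors by 1/density
        mask = rng.random(d.shape) < p
        pruned.append(d * mask / p)
    # TIES: elect a majority sign per parameter from the weighted deltas
    weighted = [w * d for w, d in zip(weights, pruned)]
    sign = np.sign(sum(weighted))
    # Keep only contributions that agree with the elected sign, then sum
    merged_delta = sum(np.where(np.sign(d) == sign, d, 0.0) for d in weighted)
    return base + merged_delta
```

With a density of 1.0 nothing is dropped, so the function reduces to a plain sign-consensus (TIES-like) weighted merge; lower densities, like the 0.52 used for Tippy-Toppy-7b, sparsify that model's contribution before merging.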