Luni/StarDust-12b-v1
StarDust-12b-v1 is a 12-billion-parameter language model created by Luni, merged using the DARE TIES method with Sao10K/MN-12B-Lyra-v3 as its base. The model is optimized for role-playing scenarios, offering vibrant, less generic prose, and is designed for interactive narrative generation rather than direct conversational output or general-purpose tasks. It supports a context length of 32768 tokens.
Model Overview
Luni/StarDust-12b-v1 is a 12-billion-parameter language model developed by Luni, created by merging several models with the DARE TIES method, with Sao10K/MN-12B-Lyra-v3 as the foundational base. The merge aimed to produce a model with more vibrant, less generic prose, specifically tailored for interactive storytelling.
Key Capabilities
- Role-playing: The model is primarily designed for generating engaging and dynamic role-playing narratives.
- Prose Style: It offers a distinct prose style, capable of expressing both gentle and harsh tones as requested within a role-play context.
- Context Length: Supports a substantial context length of 32768 tokens.
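Long role-play sessions eventually exceed even a 32768-token window, so a client typically trims the oldest turns before each request. A minimal, model-agnostic sketch of that idea (the token counter is supplied by the caller, e.g. the model's tokenizer; the `reserve` head-room value is an illustrative assumption):

```python
# Sketch: keep a role-play transcript within StarDust-12b-v1's 32768-token
# context window by dropping the oldest turns first.
from typing import Callable, List, Tuple

CONTEXT_LIMIT = 32768  # the model's advertised context length


def trim_history(
    turns: List[Tuple[str, str]],        # (role, text) pairs, oldest first
    count_tokens: Callable[[str], int],  # tokenizer-backed token counter
    reserve: int = 1024,                 # assumed head-room for the reply
) -> List[Tuple[str, str]]:
    """Drop oldest turns until the transcript fits within the budget."""
    budget = CONTEXT_LIMIT - reserve
    kept: List[Tuple[str, str]] = []
    used = 0
    # Walk newest-first so the most recent turns always survive.
    for role, text in reversed(turns):
        cost = count_tokens(text)
        if used + cost > budget:
            break
        kept.append((role, text))
        used += cost
    kept.reverse()
    return kept
```

In practice `count_tokens` would wrap the model's own tokenizer so the budget matches what the server actually sees.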
Intended Use and Limitations
This model is intended for role-playing and is not recommended for direct conversational output, general-purpose tasks, or direct instruction following. Initial feedback indicates a tendency towards flirting, which can be mitigated using system prompts to steer interactions towards SFW and non-flirty content. The model performs best with the ChatML prompting format.
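Since the card recommends ChatML and suggests a system prompt to keep output SFW and non-flirty, a minimal sketch of assembling such a prompt by hand follows. The exact system-prompt wording is an assumption; adjust it to taste.

```python
# Sketch: single-turn ChatML prompt with an SFW-steering system prompt.
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt and open the assistant turn for generation."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


prompt = chatml_prompt(
    system="You are a narrator for a strictly SFW, non-flirty adventure.",
    user="The party reaches the ruined watchtower at dusk.",
)
print(prompt)
```

Multi-turn prompts repeat the `<|im_start|>role ... <|im_end|>` pattern for each turn; inference stacks with a chat template for this model apply the same format automatically.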
Performance Metrics
Evaluations on the Open LLM Leaderboard show an average score of 23.17. Specific metrics include:
- IFEval (0-shot): 54.59
- BBH (3-shot): 34.45
- MMLU-PRO (5-shot): 26.80
Quants Available
- GGUF: mradermacher/StarDust-12b-v1-GGUF
- weighted/imatrix GGUF: mradermacher/StarDust-12b-v1-i1-GGUF
- exl2: lucyknada/Luni_StarDust-12b-v1-exl2
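A hedged sketch of running one of the GGUF quants listed above with llama-cpp-python. The quant filename pattern is an assumption (check the repo's file listing for the variant you want), and the sketch only defines `main()` rather than calling it, since actually loading the model downloads several gigabytes.

```python
# Sketch: local inference on a GGUF quant of StarDust-12b-v1 via
# llama-cpp-python. Repo id is from the quants list; filename is assumed.
REPO_ID = "mradermacher/StarDust-12b-v1-GGUF"
QUANT_FILE = "*Q4_K_M.gguf"   # assumed naming; pick any quant the repo offers
CHATML_STOP = ["<|im_end|>"]  # ChatML end-of-turn marker, used as a stop string


def main() -> None:
    # Imported here so the sketch can be read without llama-cpp-python installed.
    from llama_cpp import Llama

    llm = Llama.from_pretrained(
        repo_id=REPO_ID,
        filename=QUANT_FILE,
        n_ctx=32768,  # the model's advertised context length
    )
    out = llm(
        "<|im_start|>system\nYou are a narrator in a fantasy role-play.<|im_end|>\n"
        "<|im_start|>user\nDescribe the tavern as I enter.<|im_end|>\n"
        "<|im_start|>assistant\n",
        max_tokens=256,
        stop=CHATML_STOP,
    )
    print(out["choices"][0]["text"])


# Call main() to run; requires llama-cpp-python and a model download.
```

The exl2 quant targets the exllamav2 runtime instead and is loaded through that library's own API.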