LatitudeGames/Muse-12B is a 12-billion-parameter language model built on the Mistral Nemo architecture and fine-tuned through supervised fine-tuning (SFT) followed by two stages of Direct Preference Optimization (DPO). It excels at generating long, emotionally rich narratives with strong character relationships, and it maintains coherence across its 32,768-token context window. Muse-12B is particularly optimized for creative writing, roleplay, and text adventure scenarios, delivering natural expression and emotional depth.
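For reference, below is a minimal generation sketch using the Hugging Face `transformers` causal-LM API, assuming the model is published on the Hub under `LatitudeGames/Muse-12B`. The sampling parameters, dtype, and prompt are illustrative assumptions, not documented recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LatitudeGames/Muse-12B"  # Hub repository ID (assumed)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 12B parameters; bf16 keeps memory use manageable
    device_map="auto",
)

prompt = "The old lighthouse keeper lit the lamp one last time."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,   # long-form narrative continuation
    do_sample=True,
    temperature=0.8,      # illustrative sampling settings, not tuned values
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the prompt
generated = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```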