Helcyon-Mercury-12B-v3.2: State-of-the-Art Conversational Presence
Helcyon-Mercury-12B-v3.2, developed by HardWire and XeyonAI, is a 12-billion-parameter conversational AI model built on the Mistral Nemo 12B base. Version 3.2 focuses on delivering a "sharp, human-sounding" conversational experience, emphasizing depth, tone awareness, and a consistent identity across extended dialogues. It supports a 32,768-token context length, enabling long-form conversations without loss of coherence.
Key Capabilities
- Enhanced Tone Control: Refined emotional matching, natural humor, and deeper empathy without "therapy-speak."
- Conversational Refinement: Smoother turn-taking, improved context awareness over long threads, and sharper cue detection.
- Expanded Emotional Range: Authentic warmth, better handling of serious topics, and appropriate shifts in tone.
- Consistent Identity: Maintains character and tone without drift or resets.
- Roleplay Mastery: Designed for immersive, aware, and engaging character interactions.
- Real-World Tasks: Capable of administrative letters, rewrites, and summaries.
- "GPT-4o Vibe": Described as sharp, present, and responsive, with a "zero filter" approach.
Good for
- Users desiring natural, long-form conversations with emotional intelligence.
- Creative writing, storytelling, and narrative support.
- Deep roleplay and immersive character interaction.
- Professional and administrative writing tasks requiring nuanced tone.
- Applications where consistent AI identity and context tracking are crucial.
Helcyon-Mercury 3.2 was trained with full-weight training on Mistral Nemo 12B, combined with LoRA merges for targeted refinements in tone, humor, and empathy. The training dataset included conversational examples, perspective switching, formal task writing, creative storytelling, and deep roleplay, formatted in ChatML and using DPO (Direct Preference Optimization) data. The model is licensed under Apache 2.0, allowing free commercial and private use with attribution.
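Because the training data used the ChatML format, prompts at inference time should follow the same layout. Below is a minimal sketch of how a conversation turn might be assembled; the `<|im_start|>`/`<|im_end|>` markers follow the standard ChatML convention, and the system prompt text is purely illustrative (in practice, a tokenizer's built-in chat template would handle this automatically):

```python
# Minimal sketch of the ChatML message layout the training data reportedly used.
# The system prompt and messages here are illustrative assumptions.

def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a warm, attentive conversational partner."},
    {"role": "user", "content": "Help me draft a short administrative letter."},
])
print(prompt)
```

When serving the model with a library such as Hugging Face transformers, the equivalent result is usually obtained via the tokenizer's `apply_chat_template` method rather than hand-building the string.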