HexaMind-Llama-3.1-8B-v25: A Generalist Model with SOTA Reasoning & Safety
HexaMind v25, developed by s21mind, is an 8-billion-parameter model based on Llama 3.1, designed to be the #1-performing 8B model in reasoning and safety. It uses a "Restoration Merge" strategy to combine state-of-the-art math and science reasoning with industrial-grade safety, tackling the "Alignment Tax" problem, in which safety measures often degrade general intelligence.
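This card does not publish the "Restoration Merge" procedure itself, so the sketch below only illustrates the general family of techniques it belongs to: linearly interpolating two fine-tunes of the same base model while biasing safety-sensitive tensors toward the safety checkpoint. The function name, the layer-name heuristic, and all coefficients are illustrative assumptions, not the actual recipe.

```python
import torch

def restoration_style_merge(reasoning_sd: dict, safety_sd: dict,
                            alpha: float = 0.6, safety_alpha: float = 0.1) -> dict:
    """Linearly interpolate two state dicts that share one architecture.

    Most tensors take weight `alpha` from the reasoning checkpoint; tensors
    whose names look alignment-sensitive (a hypothetical heuristic) instead
    lean toward the safety checkpoint via `safety_alpha`.
    """
    merged = {}
    for name, w_reasoning in reasoning_sd.items():
        w_safety = safety_sd[name]
        a = safety_alpha if ("lm_head" in name or "layers.31" in name) else alpha
        merged[name] = a * w_reasoning + (1.0 - a) * w_safety
    return merged

# Tiny demo on stand-in tensors.
reasoning = {"layers.0.mlp": torch.ones(2, 2), "lm_head": torch.ones(2, 2)}
safety = {"layers.0.mlp": torch.zeros(2, 2), "lm_head": torch.zeros(2, 2)}
merged = restoration_style_merge(reasoning, safety)
print(merged["layers.0.mlp"][0, 0].item(), merged["lm_head"][0, 0].item())  # ~0.6 and ~0.1
```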
Key Capabilities & Differentiators
- SOTA Reasoning: Scores 38.00% on hard math and 28.00% on GPQA (science), well above the Llama 3.1-8B baseline and roughly a 4x improvement in math.
- Industrial-Grade Safety: Reports approximately 90% truthfulness, positioning it as a leader in safety by enforcing strict hallucination boundaries derived from S21 Vacuum Theory.
- Topological Merge Strategy: Rather than relying solely on more data, HexaMind v25 applies S21 Topological Filtering, training only on "Stable Data": examples screened for circular logic, disconnected facts, and high epistemic stuttering (a proxy sketch of such a filter follows this list).
- Targeted Training: The training recipe mixes 40% math (NuminaMath), 30% reasoning (OpenHermes/SlimOrca), 20% safety (HexaMind DPO), and 10% general knowledge (MMLU "Quiz Mode"), all filtered for S21 Stability and CoT Coherence; the mixture is restated as sampling weights below.
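The card does not define the S21 stability criteria formally, so the following is only a hedged sketch: each check is a stand-in heuristic for one of the three failure modes named above, and every threshold and field name (e.g. `example["response"]`) is an assumption.

```python
import re

def is_stable(example: dict, max_hedge_ratio: float = 0.3) -> bool:
    """Keep an example only if it passes three proxy stability checks."""
    steps = [s.strip() for s in example["response"].split("\n") if s.strip()]
    if not steps:
        return False
    # "Circular logic" proxy: the final step must not restate an earlier step verbatim.
    if steps[-1] in steps[:-1]:
        return False
    # "Disconnected facts" proxy: consecutive steps should share at least one token.
    for prev, cur in zip(steps, steps[1:]):
        if not set(prev.lower().split()) & set(cur.lower().split()):
            return False
    # "Epistemic stuttering" proxy: cap the density of hedging words per step.
    hedges = re.findall(r"\b(?:maybe|perhaps|possibly|might)\b", example["response"].lower())
    return len(hedges) / len(steps) <= max_hedge_ratio

# Usage on a stand-in record.
print(is_stable({"response": "Speed is distance over time.\nSo speed = 120 / 1.5 = 80 km/h."}))
```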
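The stated recipe can be written directly as sampling weights. The source labels below simply mirror the datasets named in the bullet above; the loader itself is out of scope here.

```python
import random

# The stated 40/30/20/10 recipe as sampling weights.
MIXTURE = {
    "numinamath": 0.40,           # math (NuminaMath)
    "openhermes_slimorca": 0.30,  # reasoning (OpenHermes/SlimOrca)
    "hexamind_dpo": 0.20,         # safety (HexaMind DPO)
    "mmlu_quiz_mode": 0.10,       # general knowledge (MMLU "Quiz Mode")
}
assert abs(sum(MIXTURE.values()) - 1.0) < 1e-9

def sample_source(rng: random.Random) -> str:
    """Draw the source of the next training example according to the mixture."""
    return rng.choices(list(MIXTURE), weights=list(MIXTURE.values()), k=1)[0]

print(sample_source(random.Random(0)))  # draws one source per the 40/30/20/10 split
```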
Use Cases
This model is ideal for applications requiring a powerful 8B LLM that can handle complex reasoning tasks while maintaining high levels of safety and truthfulness. Its strong performance in math and science, combined with its robust safety mechanisms, makes it suitable for environments where accuracy and reliability are paramount, such as educational tools, scientific research assistants, or secure conversational AI.
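For reference, here is a minimal quickstart with Hugging Face `transformers`, assuming the model is published under the hub id `s21mind/HexaMind-Llama-3.1-8B-v25` (inferred from the names above, not confirmed by this card) and ships an instruct-style chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "s21mind/HexaMind-Llama-3.1-8B-v25"  # assumed hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~16 GB of weights for an 8B model in bf16
    device_map="auto",
)

messages = [{"role": "user",
             "content": "A train covers 120 km in 1.5 hours. What is its average speed?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```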