Kukedlc/NeuralMaths-Experiment-7b
Kukedlc/NeuralMaths-Experiment-7b is a 7-billion-parameter model merge engineered for mathematical reasoning and problem-solving. Created by Kukedlc, it combines several specialized models using the dare_ties merge method and has reached the top of the GSM8K leaderboard. With an 8192-token context length and strong performance on arithmetic and logical tasks, it is well suited to applications that require solid quantitative capabilities.
NeuralMaths-Experiment-7b: A Math-Optimized 7B Model
NeuralMaths-Experiment-7b is a 7-billion-parameter language model developed by Kukedlc, distinguished by its strong performance in mathematical reasoning. It is a merge of several specialized models, namely WizardLM/WizardMath-7B-V1.1, mlabonne/NeuralDaredevil-7B, Kukedlc/Neural4gsm8k, Eric111/Mayo, and Kukedlc/NeuralSirKrishna-7b, combined with the dare_ties merge method. This composition has propelled it to the top of the GSM8K leaderboard, reflecting its proficiency in grade-school math problems.
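For reference, the sketch below shows what a dare_ties merge configuration for this combination of models could look like in the style used by mergekit (assumed here as the merge tooling). The base model, densities, and weights are illustrative placeholders, not the author's actual values.

```python
# Illustrative dare_ties merge configuration in mergekit's YAML format.
# NOTE: base_model, density, and weight values below are assumptions for
# demonstration only; the author's real configuration is not shown in this card.
import yaml

merge_config = {
    "merge_method": "dare_ties",
    "base_model": "mistralai/Mistral-7B-v0.1",  # assumed Mistral base, not stated above
    "dtype": "bfloat16",
    "models": [
        {"model": name, "parameters": {"density": 0.5, "weight": 0.2}}
        for name in [
            "WizardLM/WizardMath-7B-V1.1",
            "mlabonne/NeuralDaredevil-7B",
            "Kukedlc/Neural4gsm8k",
            "Eric111/Mayo",
            "Kukedlc/NeuralSirKrishna-7b",
        ]
    ],
}

# Write the config to disk; it could then be run with, e.g.:
#   mergekit-yaml neuralmaths_merge.yml ./merged-model
with open("neuralmaths_merge.yml", "w") as f:
    yaml.safe_dump(merge_config, f, sort_keys=False)
```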
Key Capabilities
- Exceptional Mathematical Reasoning: Achieves a 75.21% score on GSM8K (5-shot), demonstrating strong capabilities in arithmetic and logical problem-solving.
- Merged Architecture: Benefits from the combined strengths of multiple models, specifically tuned for mathematical and reasoning tasks.
- Solid General Performance: Exhibits competitive scores across various benchmarks, including 69.71% on AI2 Reasoning Challenge and 65.01% on MMLU.
Good For
- Applications requiring high accuracy in mathematical problem-solving.
- Tasks involving complex reasoning and quantitative analysis.
- Developers seeking a specialized model for educational tools or scientific computing where mathematical precision is crucial.
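As a quick start, the sketch below loads the model with Hugging Face transformers and poses a grade-school-style word problem. The prompt format and generation settings are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch: load Kukedlc/NeuralMaths-Experiment-7b and ask a math word problem.
# Prompt wording and decoding parameters are illustrative, not prescribed by the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kukedlc/NeuralMaths-Experiment-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "Question: A bakery sells 12 muffins per tray and bakes 7 trays each morning. "
    "If 15 muffins are left unsold at the end of the day, how many were sold?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```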