EleutherAI/llemma_7b_muinstruct_camelmath is a 7-billion-parameter instruction-following model, fine-tuned by EleutherAI from the Llemma 7B base model on the μInstruct and camel-ai/math datasets. It is optimized for mathematical reasoning and problem-solving, and it outperforms other 7B Llama-2-based models on the Hungarian Math Exam, making it well suited to applications that require robust mathematical capabilities.
Overview
EleutherAI/llemma_7b_muinstruct_camelmath is a 7 billion parameter instruction-following model developed by EleutherAI. It is a fine-tuned version of the Llemma 7B base model, specifically trained on a combination of the μInstruct and camel-ai/math datasets.
Key Capabilities
- Mathematical Reasoning: The model is specialized in mathematical problem-solving, leveraging its training on dedicated math datasets.
- Instruction Following: It is designed to follow instructions effectively, making it suitable for interactive applications.
- Performance: It scores 25% on the Hungarian Math Exam in few-shot settings, outperforming other 7B Llama-2-based models such as Code Llama 7B (8%), MetaMath 7B (20%), and the base Llemma 7B (23%). It also compares favorably against Mistral 7B (22%), though it trails MetaMath Mistral 7B (29%).
Input Formatting
Format input queries as Input:{input}\n\nResponse:. Note that, due to a training error, the model's end-of-sequence token ID is 0 rather than the standard 2 used by Llama-2-based models. Most inference APIs handle this automatically, but custom decoding loops should stop on token ID 0.
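As a minimal sketch, the prompt template above can be applied with a small helper function (the function name and the example question are illustrative, not part of the model card):

```python
def format_prompt(question: str) -> str:
    # Wrap a query in the model's expected template: Input:{input}\n\nResponse:
    return f"Input:{question}\n\nResponse:"

print(format_prompt("Solve for x: 2x + 3 = 11"))
# Input:Solve for x: 2x + 3 = 11
#
# Response:
```

If you run the model locally, e.g. with Hugging Face transformers, you can pass eos_token_id=0 to model.generate() in case the checkpoint's tokenizer configuration does not already reflect the non-standard end-of-sequence token.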
Good for
- Applications requiring strong mathematical problem-solving abilities.
- Tasks involving instruction-based mathematical reasoning.
- Use cases where a 7B parameter model with specialized math capabilities is needed.