Athenea-4B-Math: Specialized Mathematical Reasoning
Athenea-4B-Math is a 4-billion-parameter model developed by Aquiles-ai, fine-tuned from the Huihui-Qwen3-4B-Thinking-2507-abliterated base. It specializes in mathematical reasoning and problem-solving, particularly calculus, algebra, and equation solving. The model is trained to generate detailed, step-by-step reasoning wrapped in <think> and </think> tags, which improves transparency and logical consistency.
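Because the reasoning is delimited by the think tags described above, it can be separated from the final answer with a simple split. A minimal sketch, assuming the output follows that tag format (the sample string below is illustrative, not actual model output):

```python
import re

def split_reasoning(text: str):
    """Split model output into (reasoning trace, final answer).

    Returns (None, text) when no <think>...</think> block is present.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return None, text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Illustrative output string in the documented format:
output = "<think>2x + 3 = 7, so 2x = 4 and x = 2.</think>\nx = 2"
reasoning, answer = split_reasoning(output)
```

Keeping the split in a helper like this makes it easy to log or hide the trace while surfacing only the answer to end users.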
Key Capabilities
- Step-by-step mathematical reasoning: Generates explicit thought processes for complex problems.
- Specialization: Highly proficient in calculus, algebra, and general mathematical problem-solving.
- Uncensored output: generation is unrestricted, so the full reasoning trace stays visible.
- Improved logical consistency: Achieved through focused fine-tuning on high-quality mathematical datasets.
- Broad compatibility: Works with open inference frameworks like Transformers and vLLM.
Training and Usage
The model was fine-tuned on the proprietary dataset Aquiles-ai/Athenea-Math-100k, which contains diverse math problems with reasoning traces, and supports a context length of 40,960 tokens. For deployment, it integrates with vLLM for accelerated inference, and an open-source playground is available for local experimentation.
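When serving the model, the prompt plus the requested completion must stay inside the 40,960-token window. A minimal budget check, as a sketch (in practice the prompt token count would come from the model's tokenizer; the numbers below are illustrative):

```python
MAX_CONTEXT = 40960  # context length supported by Athenea-4B-Math

def fits_context(prompt_tokens: int, max_new_tokens: int,
                 limit: int = MAX_CONTEXT) -> bool:
    """Return True if the prompt plus the requested completion fits the window."""
    return prompt_tokens + max_new_tokens <= limit

# A long reasoning trace still fits if the prompt leaves enough headroom:
fits_context(32000, 8192)   # 32000 + 8192 = 40192 <= 40960
fits_context(38000, 4096)   # 38000 + 4096 = 42096 >  40960
```

Running a check like this before dispatching a request avoids truncated reasoning traces, which matter more for a step-by-step model than for a short-answer one.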