Overview
Amber Fable 1.0: Specialized Mathematical Reasoning Model
Amber Fable 1.0, developed by Arioron, is a 1.7-billion-parameter language model fine-tuned from the Qwen3-1.7B base with LoRA (Low-Rank Adaptation). Its primary focus is mathematical reasoning and algorithmic logic, making it an efficient choice for tasks that require precise numerical and logical processing.
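The snippet below is a minimal loading sketch, assuming the LoRA weights are distributed as a PEFT adapter on the Hugging Face Hub; the repo id `Arioron/Amber-Fable-1.0` is a placeholder, not a confirmed location, and the base repo id `Qwen/Qwen3-1.7B` is inferred from the base model named above.

```python
# Minimal loading sketch: base model + LoRA adapter via PEFT.
# ADAPTER_ID is a hypothetical placeholder; substitute the real adapter repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "Qwen/Qwen3-1.7B"             # Qwen3-1.7B base named in this card
ADAPTER_ID = "Arioron/Amber-Fable-1.0"  # hypothetical adapter repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the LoRA weights
model = model.merge_and_unload()  # optional: fold the LoRA deltas into the base weights
model.eval()
```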
Key Capabilities & Performance
- Exceptional Math Performance: Achieves 75.0% accuracy on GSM8K (grade-school math word problems) and 55.0% on MATH (advanced math problems), strong results for a model of this size (see the prompting sketch after this list).
- Algorithmic Logic: Shows a 42.0% Pass@1 on HumanEval, indicating proficiency in Python coding and algorithmic problem-solving.
- Specialized Tuning: Fine-tuned on synthetic data and textbooks to optimize for mathematical tasks, which results in a trade-off with general world knowledge (MMLU score of 22.0%).
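As a rough illustration of the math-focused behaviour these scores reflect, the sketch below reuses `model` and `tokenizer` from the loading example above, prompts for a step-by-step solution to a GSM8K-style word problem, and extracts the final number. The chat-prompt wording and the answer-extraction regex are assumptions for illustration, not the official evaluation harness.

```python
# Ask for a step-by-step solution to one GSM8K-style problem and
# take the last number in the completion as the predicted answer.
import re

question = (
    "A bakery sells muffins for $3 each. If Dana buys 7 muffins and pays "
    "with a $50 bill, how much change does she receive?"
)
messages = [
    {"role": "user",
     "content": f"{question}\nSolve step by step, then state the final answer."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, do_sample=False)
completion = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
predicted = numbers[-1] if numbers else None
print(completion)
print("Predicted answer:", predicted)  # expected: 29 (50 - 7 * 3)
```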
Ideal Use Cases
- Educational Tools: Excellent for applications in tutoring, generating math problems, and providing step-by-step solutions.
- Logic-Based Tasks: Suitable for puzzles, scripting, and other scenarios requiring precise logical deduction.
- Resource-Efficient Deployment: At 1.7B parameters, it delivers strong performance in its specialized domain while remaining lightweight to deploy; a low-memory loading sketch follows this list.
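For constrained hardware, one option is to quantize the base model before attaching the adapter. The sketch below assumes a CUDA GPU with `bitsandbytes` installed; the adapter repo id is again a placeholder, as in the loading sketch above.

```python
# Illustrative low-memory deployment: load the base in 4-bit, then attach the LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-1.7B", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Arioron/Amber-Fable-1.0")  # placeholder repo id

# At 4-bit, the 1.7B-parameter weights occupy roughly 1 GB, versus ~3.4 GB in bf16.
```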