Overview
Elliott/LUFFY-Qwen-Math-7B-Zero is a 7.6-billion-parameter model trained with LUFFY, a reinforcement learning framework that bridges zero-RL and imitation learning by incorporating off-policy reasoning traces and shaping the policy through regularized importance sampling. Built on GRPO, LUFFY emphasizes crucial but low-probability actions, which improves generalization.
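The policy-shaping idea can be sketched as follows. This is a minimal illustration, not the released training code: it assumes a shaping function of the form f(r) = r / (r + γ) applied to the token-level importance ratio between the learner policy and the off-policy traces, and the γ value and function names here are hypothetical.

```python
import math

def shaped_weight(ratio: float, gamma: float = 0.1) -> float:
    """Regularized importance weight f(r) = r / (r + gamma).

    Relative to the raw ratio r, this shaping amplifies the signal on
    low-probability (small-r) tokens and saturates below 1 for
    high-probability ones. gamma = 0.1 is an illustrative default.
    """
    return ratio / (ratio + gamma)

def shaped_offpolicy_surrogate(policy_logps, offpolicy_logps, advantages,
                               gamma: float = 0.1) -> float:
    """Toy per-token surrogate loss with shaped importance weights.

    policy_logps:    log-probs of each token under the learner policy
    offpolicy_logps: log-probs of the same tokens under the off-policy
                     traces (e.g. a stronger teacher model)
    advantages:      per-token advantages (e.g. group-normalized, as in GRPO)
    """
    total = 0.0
    for lp, off_lp, adv in zip(policy_logps, offpolicy_logps, advantages):
        r = math.exp(lp - off_lp)               # importance ratio pi_theta / pi_off
        total -= shaped_weight(r, gamma) * adv * lp  # negative => minimize loss
    return total / len(policy_logps)
```

Note that f(r) is monotone in r but bounded by 1, so unlike raw importance sampling it never lets a single high-ratio token dominate the update, while small-ratio tokens keep a non-vanishing weight.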
Key Capabilities
- Off-Policy Guidance: Integrates external reasoning traces from stronger models to accelerate learning.
- Dynamic Balance: Adapts its learning strategy to balance imitation and exploration throughout training.
- Policy Shaping: Focuses on important actions often overlooked by standard policy gradients, enhancing generalization.
- Mathematical Reasoning: Achieves state-of-the-art performance on six competition-level math benchmarks, including AIME, AMC, and MATH-500, surpassing both on-policy RL and SFT methods.
- Out-of-Distribution Generalization: Demonstrates strong generalization on out-of-distribution tasks such as ARC-C, GPQA, and MMLU-Pro, with an average gain of more than +6.2 points over comparable models.
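How off-policy guidance can plug into a GRPO-style update may be easier to see in code. The sketch below is a simplified assumption about the mechanism, not the paper's implementation: it merges the learner's own rollouts with external teacher traces into one group and computes group-normalized advantages over the combined set, so the demonstrations influence the baseline that on-policy samples are scored against.

```python
import statistics

def grpo_advantages(rewards):
    """Group-normalized advantages in the GRPO style: (r - mean) / std."""
    mu = statistics.fmean(rewards)
    sd = statistics.pstdev(rewards) or 1.0  # guard against zero-variance groups
    return [(r - mu) / sd for r in rewards]

def build_mixed_group(on_policy_rollouts, off_policy_traces):
    """Hypothetical sketch: score on-policy rollouts and off-policy
    reasoning traces jointly in a single group.

    Each input is a list of (sequence, reward) pairs; the output tags
    every entry with its source and its group-normalized advantage.
    """
    group = [(seq, reward, "on") for seq, reward in on_policy_rollouts]
    group += [(seq, reward, "off") for seq, reward in off_policy_traces]
    advs = grpo_advantages([reward for _, reward, _ in group])
    return [(seq, adv, src) for (seq, _, src), adv in zip(group, advs)]
```

Because successful teacher traces typically carry high rewards, they raise the group mean, pushing the advantages of failed on-policy rollouts down and steering the update toward the demonstrated reasoning.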
Good For
- Solving complex mathematical problems and competitive math challenges.
- Applications requiring robust reasoning and generalization to unseen tasks.
- Scenarios where leveraging off-policy demonstrations can significantly improve learning efficiency and performance.