Elliott/LUFFY-Qwen-Math-7B-Zero

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Published: Apr 19, 2025 · License: MIT · Architecture: Transformer · Open Weights

Elliott/LUFFY-Qwen-Math-7B-Zero is a 7.6 billion parameter model based on the Qwen architecture, developed by Elliott. It utilizes a reinforcement learning framework that integrates off-policy reasoning traces and policy shaping to enhance learning. This model is specifically optimized for complex mathematical reasoning and generalization, achieving state-of-the-art results among zero-RL methods on competitive math benchmarks and out-of-distribution tasks.


Overview

Elliott/LUFFY-Qwen-Math-7B-Zero is a 7.6 billion parameter model trained with a reinforcement learning framework named LUFFY. The framework bridges zero-RL and imitation learning by incorporating off-policy reasoning traces from stronger models and introducing policy shaping via regularized importance sampling. Built on GRPO (Group Relative Policy Optimization), it emphasizes crucial, low-probability actions during training, which improves generalization.
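The core idea of policy shaping can be illustrated with a small sketch. The shaping function below, `f(p) = p / (p + gamma)`, is an assumed form for regularized importance sampling (the exact function and hyperparameters used by LUFFY may differ); the point is that it boosts the relative weight of low-probability tokens compared to using the raw probability directly:

```python
def shaped_weight(p: float, gamma: float = 0.1) -> float:
    # Hedged sketch of a regularized importance-sampling shaping function.
    # f(p) = p / (p + gamma): monotone in p, but its derivative
    # gamma / (p + gamma)^2 is largest for small p, so low-probability
    # ("crucial but rarely sampled") tokens get proportionally stronger
    # gradient signal than under the unshaped weight p.
    return p / (p + gamma)

# A rare token (p = 0.01) vs. a confident token (p = 0.9):
low, high = shaped_weight(0.01), shaped_weight(0.9)
# The shaped ratio low/high exceeds the raw ratio 0.01/0.9,
# i.e. the rare token's relative weight is amplified.
```

Without shaping, tokens the policy already assigns near-zero probability contribute almost nothing to the gradient, so the model never learns the off-policy reasoning steps it most needs; the shaping function counteracts exactly that.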

Key Capabilities

  • Off-Policy Guidance: Integrates external reasoning traces from stronger models to accelerate learning.
  • Dynamic Balance: Adapts its learning strategy to balance imitation and exploration throughout training.
  • Policy Shaping: Focuses on important actions often overlooked by standard policy gradients, enhancing generalization.
  • Mathematical Reasoning: Achieves state-of-the-art performance on six competition-level math benchmarks, including AIME, AMC, and MATH-500, surpassing both on-policy RL and SFT methods.
  • Out-of-Distribution Generalization: Demonstrates strong generalization on out-of-distribution tasks such as ARC-C, GPQA, and MMLU-Pro, with an average gain of over +6.2 points compared to baseline models.
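The GRPO foundation mentioned above can be sketched in a few lines. This is a minimal, assumed illustration of group-normalized advantages, with an off-policy trace's reward simply appended to the on-policy group (LUFFY's actual mixed-rollout objective is more involved):

```python
def group_advantages(rewards: list[float]) -> list[float]:
    # GRPO-style advantage: standardize each rollout's reward against
    # the mean and standard deviation of its own sampling group,
    # removing the need for a learned value function.
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Hypothetical mixed group: three on-policy rollout rewards plus one
# appended off-policy trace reward (illustrating LUFFY-style guidance).
on_policy = [0.0, 1.0, 0.0]
mixed = group_advantages(on_policy + [1.0])
```

Because advantages are normalized within the group, adding a successful off-policy trace both rewards imitating it and pushes down the relative advantage of failed on-policy rollouts.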

Good For

  • Solving complex mathematical problems and competitive math challenges.
  • Applications requiring robust reasoning and generalization to unseen tasks.
  • Scenarios where leveraging off-policy demonstrations can significantly improve learning efficiency and performance.