TMLR-Group-HF/GT-Llama-3.2-3B-Instruct-MATH

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Context Length: 32k · Published: Aug 14, 2025 · License: MIT · Architecture: Transformer · Open Weights

TMLR-Group-HF/GT-Llama-3.2-3B-Instruct-MATH is a 3.2-billion-parameter instruction-tuned Llama-3.2 model released by TMLR-Group-HF. It was trained with GRPO using ground-truth rewards (the "GT" in its name) on the MATH training set, and serves as the ground-truth-reward reference checkpoint in research on Co-rewarding, a self-supervised reinforcement learning framework for eliciting reasoning in large language models that addresses the training instability of self-rewarding methods. With a 32,768-token context length, it can handle long problem statements and extended chains of reasoning, making it well suited to complex mathematical tasks.

Model Overview

TMLR-Group-HF/GT-Llama-3.2-3B-Instruct-MATH is a 3.2-billion-parameter instruction-tuned model based on the Llama-3.2 architecture, developed by TMLR-Group-HF. It was trained with GRPO against ground-truth rewards on the MATH training set. The model is a key baseline checkpoint from research on the Co-rewarding framework, detailed in the paper "Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models" (arXiv:2508.00410).
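
Below is a minimal usage sketch with the Hugging Face transformers library. The repository id comes from this card; the prompt and generation settings are illustrative assumptions, not values published with the model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/GT-Llama-3.2-3B-Instruct-MATH"

# Load in BF16, matching the precision listed in the model metadata.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Example math prompt (hypothetical; any MATH-style problem works).
messages = [
    {"role": "user", "content": "If 3x + 5 = 20, what is x? Show your reasoning."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```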

Key Capabilities

  • Enhanced Reasoning: Post-trained with reinforcement learning (GRPO) against ground-truth answers to strengthen step-by-step reasoning; a sketch of such a reward function follows this list.
  • Training Stability: Serves as the ground-truth-reward baseline for the Co-rewarding framework, which addresses the training collapse common in single-view self-rewarding methods by seeking complementary supervision from multiple views.
  • Mathematical Proficiency: Specifically trained on the MATH dataset, indicating a strong focus on mathematical problem-solving.
  • Instruction Following: Instruction-tuned to better understand and respond to user prompts.
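
To make "GRPO against ground-truth rewards" concrete, the sketch below shows a verifiable reward of the kind commonly used for MATH-style training: extract the model's final \boxed{...} answer and compare it to the reference. The extraction and normalization details are assumptions for illustration, not taken from the paper.

```python
import re

def ground_truth_reward(completion: str, reference_answer: str) -> float:
    """Binary reward: 1.0 if the final \\boxed{...} answer matches the reference.

    Illustrative sketch only; the paper's exact answer matcher is not
    reproduced here (e.g., nested braces and LaTeX normalization are ignored).
    """
    matches = re.findall(r"\\boxed\{([^{}]*)\}", completion)
    if not matches:
        return 0.0  # no parseable final answer
    predicted = matches[-1].strip()  # take the last boxed expression
    return 1.0 if predicted == reference_answer.strip() else 0.0

# In GRPO, rewards like this are computed per sampled completion and then
# normalized within each group of samples to form advantages, e.g.:
# rewards = [ground_truth_reward(c, gt) for c in sampled_completions]
```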

Good For

  • Mathematical Problem Solving: Ideal for applications requiring robust mathematical reasoning.
  • Research in RL and Reasoning: Useful for researchers exploring stable self-supervised reinforcement learning techniques for LLMs.
  • Complex Reasoning Tasks: Suitable for scenarios where eliciting detailed reasoning steps is crucial.