swadeshb/Llama-3.2-3B-Instruct-CRPO-V20

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Nov 30, 2025 · Architecture: Transformer

swadeshb/Llama-3.2-3B-Instruct-CRPO-V20 is a 3.2-billion-parameter instruction-tuned causal language model, fine-tuned from meta-llama/Llama-3.2-3B-Instruct. It was trained with the GRPO method, a reinforcement-learning technique designed to enhance mathematical reasoning. With a context length of 32,768 tokens, it is suited to tasks requiring robust logical and mathematical problem-solving.
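A minimal inference sketch with the Hugging Face `transformers` library, assuming the model uses the standard Llama-3.2 chat template; the system prompt and the sample problem below are illustrative, not part of the model card.

```python
MODEL_ID = "swadeshb/Llama-3.2-3B-Instruct-CRPO-V20"

def build_messages(problem: str) -> list[dict]:
    """Wrap a math problem in the chat format Llama-3.2 instruct models expect."""
    return [
        {"role": "system",
         "content": "You are a careful mathematical reasoner. Show your steps."},
        {"role": "user", "content": problem},
    ]

def generate(problem: str, max_new_tokens: int = 512) -> str:
    # Heavy imports are kept local so the prompt helper above can be used
    # without the `transformers` dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"  # BF16, per the card
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(problem), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

# Example usage (requires downloading the model weights):
# print(generate("Solve for x: 3x + 7 = 22. Show each step."))
```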


Model Overview

swadeshb/Llama-3.2-3B-Instruct-CRPO-V20 is a fine-tuned variant of the meta-llama/Llama-3.2-3B-Instruct base model, with 3.2 billion parameters and a 32,768-token context window. It was developed by swadeshb and trained using the TRL framework.

Key Capabilities

  • Enhanced Mathematical Reasoning: The model's primary differentiator is its training with GRPO (Group Relative Policy Optimization), a method introduced in the "DeepSeekMath" paper. This technique specifically aims to improve the model's ability to handle complex mathematical and logical reasoning tasks.
  • Instruction Following: As an instruction-tuned model, it is designed to accurately follow user prompts and generate relevant responses.
  • Large Context Window: The 32768-token context length allows for processing and generating longer, more complex texts while maintaining coherence.
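The GRPO training pipeline the card describes can be sketched with TRL's `GRPOTrainer`. The reward function, dataset, and hyperparameters below are assumptions for illustration; the card does not document the actual training recipe for this model.

```python
import re

def format_reward(completions, **kwargs):
    """Toy reward: 1.0 if the completion ends with a LaTeX boxed answer, else 0.0.
    GRPO scores groups of sampled completions and optimizes their relative rewards."""
    return [1.0 if re.search(r"\\boxed\{.+\}", c) else 0.0 for c in completions]

def train():
    # Heavy imports kept local; running this requires GPUs and the
    # `trl` + `datasets` packages.
    from datasets import load_dataset
    from trl import GRPOConfig, GRPOTrainer

    # GSM8K is a stand-in math dataset, not necessarily what was used here.
    dataset = load_dataset("openai/gsm8k", "main", split="train")
    dataset = dataset.rename_column("question", "prompt")  # GRPOTrainer expects "prompt"

    args = GRPOConfig(
        output_dir="llama32-3b-crpo",
        num_generations=8,          # completions sampled per prompt (assumed)
        max_completion_length=512,
    )
    trainer = GRPOTrainer(
        model="meta-llama/Llama-3.2-3B-Instruct",  # the base model named in the card
        reward_funcs=format_reward,
        args=args,
        train_dataset=dataset,
    )
    trainer.train()
```

In GRPO, each prompt's sampled completions are compared against their group mean reward, which removes the need for a separate value model; the reward function here only checks output format, whereas a real math-reasoning reward would also verify answer correctness.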

Good For

  • Mathematical Problem Solving: Ideal for applications requiring strong mathematical reasoning, such as solving equations, logical puzzles, or generating step-by-step solutions.
  • Complex Instruction Following: Suitable for tasks where detailed and nuanced instructions need to be interpreted and executed precisely.
  • General Conversational AI: Can be used for various instruction-based text generation tasks, leveraging its Llama-3.2 foundation.