swadeshb/Llama-3.2-3B-Instruct-CRPO-V20
Text generation · Concurrency cost: 1 · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Nov 30, 2025 · Architecture: Transformer

swadeshb/Llama-3.2-3B-Instruct-CRPO-V20 is a 3.2-billion-parameter instruction-tuned causal language model fine-tuned from meta-llama/Llama-3.2-3B-Instruct. It was trained with GRPO (Group Relative Policy Optimization), a reinforcement-learning method designed to enhance mathematical reasoning. With a context length of 32,768 tokens, it is suited to tasks requiring robust logical and mathematical problem solving.
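Because this model is fine-tuned from meta-llama/Llama-3.2-3B-Instruct, it presumably expects the Llama 3 chat prompt layout. The sketch below renders that layout by hand purely for illustration; in practice you would load the tokenizer from `transformers` and call `tokenizer.apply_chat_template` instead. The special-token layout shown is an assumption carried over from the base model, not something stated on this card.

```python
# Minimal sketch of the Llama 3 instruct chat format (assumed from the base
# model meta-llama/Llama-3.2-3B-Instruct). Prefer tokenizer.apply_chat_template
# in real use; this manual version only illustrates the token layout.

def format_llama3_prompt(messages: list[dict]) -> str:
    """Render chat messages into the Llama 3 prompt layout."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_prompt(
    [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
)
print(prompt)
```

With the real tokenizer, the equivalent call would be `tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")`, which also handles tokenization and any template updates shipped with the model.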
