Creekside/Qwen-3B-gsm8k-GRPO is a 3.1-billion-parameter Qwen2.5 model from Creekside, fine-tuned from unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit. It was trained roughly 2x faster using Unsloth together with Hugging Face's TRL library. Its 32,768-token context length makes it suited to tasks that require processing long inputs.
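
A minimal usage sketch, assuming the repository contains merged weights that load directly with the transformers library (if it only ships a LoRA adapter, the unsloth/qwen2.5-3b-instruct base would need to be loaded first and the adapter attached via peft); the example prompt is illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Creekside/Qwen-3B-gsm8k-GRPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # place layers on available GPU(s)/CPU (requires accelerate)
    torch_dtype="auto",   # use the dtype stored in the checkpoint
)

# Qwen2.5 instruct models expect chat-formatted prompts.
messages = [
    {
        "role": "user",
        "content": "Natalia sold clips to 48 of her friends in April, and then "
                   "half as many in May. How many clips did she sell in total?",
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```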