gitas/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skilled_gilded_bee
Text generation | Concurrency cost: 1 | Model size: 0.5B | Quant: BF16 | Context length: 32k | Published: Jun 11, 2025 | Architecture: Transformer

The gitas/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skilled_gilded_bee model is a 0.5-billion-parameter instruction-tuned language model, fine-tuned from unsloth/Qwen2.5-0.5B-Instruct. It was trained with the TRL framework using GRPO (Group Relative Policy Optimization), a reinforcement-learning method designed to enhance mathematical reasoning. The model is therefore best suited to tasks that require mathematical problem solving, and it supports a 32,768-token (32k) context length.
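A minimal usage sketch, assuming the standard Hugging Face Transformers chat API (the `generate_answer` helper and the generation settings are illustrative, not part of the model card):

```python
# Sketch: load the model in BF16 (its published precision) and answer one prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_ID = "gitas/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skilled_gilded_bee"

def generate_answer(prompt: str, max_new_tokens: int = 128) -> str:
    """Run a single chat turn through the model and return the decoded reply."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

    # Format the user message with the model's chat template.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_answer("What is 17 * 24?"))
```

Since the model is GRPO-tuned for mathematical reasoning, short arithmetic or word-problem prompts like the one above are the natural first test.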
