Yancyong/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_prowling_cheetah
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Jun 11, 2025 · Architecture: Transformer

Yancyong/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_prowling_cheetah is a 0.5-billion-parameter instruction-tuned language model, fine-tuned from unsloth/Qwen2.5-0.5B-Instruct. It was trained with GRPO (Group Relative Policy Optimization), a reinforcement-learning method designed to enhance mathematical reasoning. The model is suited to instruction-following tasks and may offer improved reasoning, particularly in mathematical contexts, as a result of this specialized training. It supports a context length of 32k tokens.
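A minimal usage sketch with the Hugging Face `transformers` library, assuming the model is available on the Hub under the repository ID shown in the title and follows the standard Qwen2.5 chat-message format (the question string and generation settings below are illustrative):

```python
MODEL_ID = "Yancyong/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_prowling_cheetah"

def build_chat(question: str) -> list[dict]:
    # Qwen2.5 instruct models consume the standard chat-message format:
    # a list of {"role": ..., "content": ...} dicts.
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": question},
    ]

def generate_answer(question: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the helper above works without the library installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

    # Render the chat messages into the model's prompt template.
    prompt = tokenizer.apply_chat_template(
        build_chat(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    # A math-flavored query, since GRPO training targets mathematical reasoning.
    print(generate_answer("What is 17 * 23? Show your reasoning."))
```

Note that downloading the weights requires network access; for a 0.5B model in BF16, the checkpoint is roughly 1 GB, so it runs comfortably on CPU or a small GPU.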
