theworldftx/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_mangy_kangaroo
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 6, 2025 · Architecture: Transformer
theworldftx/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_mangy_kangaroo is a 0.5-billion-parameter instruction-tuned language model, fine-tuned from Gensyn/Qwen2.5-0.5B-Instruct. It was trained with the TRL framework using GRPO (Group Relative Policy Optimization), a reinforcement-learning method designed to enhance mathematical reasoning. The model is optimized for tasks requiring robust logical and mathematical problem-solving, making it suitable for applications in scientific computing and data analysis.
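Since this is a standard Qwen2.5-style instruct checkpoint, it should load with the Hugging Face `transformers` API. The sketch below is an illustrative usage example, not documentation from the model card: the generation settings, the system prompt, and the `generate_answer` helper are assumptions; only the repo id comes from this page.

```python
MODEL_ID = "theworldftx/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_mangy_kangaroo"


def build_messages(question: str) -> list:
    """Wrap a question in the chat format Qwen2.5-Instruct models expect.

    The system prompt here is a generic placeholder, not one documented
    for this model.
    """
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": question},
    ]


def generate_answer(question: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn through the model (downloads weights on first call)."""
    # Imported lazily so the helper above stays usable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the model card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    prompt = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate_answer("What is 17 * 24? Show your reasoning."))
```

At 0.5B parameters and BF16 precision, the checkpoint needs roughly 1 GB of memory, so it runs comfortably on CPU or a small GPU.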