molla202/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_invisible_hippo
Text generation · Concurrency cost: 1 · Model size: 0.5B · Quant: BF16 · Ctx length: 32k · Published: Apr 21, 2025 · Architecture: Transformer · Warm

molla202/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_invisible_hippo is a 0.5-billion-parameter instruction-tuned language model, fine-tuned from Gensyn/Qwen2.5-0.5B-Instruct. It was trained with the TRL framework using GRPO (Group Relative Policy Optimization), a reinforcement-learning method designed to improve mathematical reasoning. The model is suited to tasks that benefit from stronger logical and mathematical problem-solving, and its card lists a context length of 131,072 tokens.
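Since this is a Qwen2.5-family instruct model, prompts follow the ChatML-style chat template (`<|im_start|>` / `<|im_end|>` markers) that Qwen2.5 models are trained on. The sketch below builds such a prompt by hand for illustration; in practice you would load the tokenizer from this repository with the `transformers` library and call `tokenizer.apply_chat_template`, which produces this format for you. The helper name `build_chatml_prompt` is hypothetical, not part of the model's code.

```python
def build_chatml_prompt(messages):
    """Format chat messages in the ChatML style used by Qwen2.5 instruct models.

    Note: this hand-rolled helper is a sketch for illustration; the tokenizer's
    apply_chat_template method is the canonical way to produce this format.
    """
    parts = []
    for message in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers.
        parts.append(f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>")
    # End with an open assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 12 * 7? Show your reasoning."},
]
prompt = build_chatml_prompt(messages)
print(prompt)
```

The resulting string can be tokenized and passed to the model's `generate` method; generation should stop at the `<|im_end|>` token.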
