eurb1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-camouflaged_gliding_salamander
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 28, 2025 · Architecture: Transformer · Status: Warm

eurb1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-camouflaged_gliding_salamander is a 0.5-billion-parameter instruction-tuned causal language model, fine-tuned from unsloth/Qwen2.5-0.5B-Instruct. It was trained with GRPO (Group Relative Policy Optimization), the reinforcement-learning method introduced in the DeepSeekMath paper, and supports a context length of 32,768 tokens. The model targets tasks that benefit from mathematical reasoning and structured problem-solving, where precise, logical outputs matter.
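
Below is a minimal inference sketch using the Hugging Face transformers library. The prompt and generation settings are illustrative assumptions, not part of the original model card; only the model identifier comes from the page above.

```python
# Minimal inference sketch for this model via transformers.
# The math prompt and max_new_tokens value are illustrative choices.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eurb1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-camouflaged_gliding_salamander"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # loads in BF16, matching the quant listed above
    device_map="auto",
)

# Build a chat-formatted prompt; the model is instruction-tuned,
# so it expects the Qwen2.5 chat template.
messages = [
    {"role": "user", "content": "Solve step by step: what is 17 * 23?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```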
