joekarim/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_peckish_pigeon
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Sep 28, 2025 · Architecture: Transformer · Status: Warm

The joekarim/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_peckish_pigeon model is a 0.5-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture. It is designed for general language understanding and generation tasks, and its compact size makes it efficient to deploy. With a context length of 32,768 tokens, it can handle applications that require processing fairly long input sequences.
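
Below is a minimal usage sketch, assuming the repository is available on the Hugging Face Hub under the ID above and follows the standard Qwen2.5 chat template; the prompt text is only illustrative.

```python
# Minimal sketch: load the model with Hugging Face transformers in BF16
# and generate a short reply. Assumes the repo ID below is reachable on
# the Hub and uses the standard Qwen2.5 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "joekarim/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_peckish_pigeon"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [
    {"role": "user", "content": "Summarize the benefits of small language models."}
]
# Build the prompt from the chat template, then generate a completion.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```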
