Okwgreg/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_chinchilla
Text generation · Concurrency cost: 1 · Model size: 0.5B · Quant: BF16 · Context length: 32k · Published: Oct 22, 2025 · Architecture: Transformer

Okwgreg/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_chinchilla is a 0.5-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture, developed by Okwgreg. It is designed for general instruction following, and its compact size combined with a 32k-token context window makes it suited to processing long textual inputs efficiently across a range of applications.
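A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under the repo id shown above and follows the standard Qwen2.5 chat template (neither is confirmed by this card; the `build_messages` and `generate` helpers are illustrative names, not part of any published API):

```python
MODEL_ID = "Okwgreg/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_chinchilla"


def build_messages(user_prompt: str) -> list[dict]:
    """Qwen2.5-style chat messages: a system turn followed by the user turn."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model lazily and return only the newly generated text."""
    # Imported here so the pure helpers above work without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

    # Render the chat turns with the model's own template, then tokenize.
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Drop the prompt tokens, keeping only the generated continuation.
    new_tokens = out[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Summarize the benefits of small language models."))
```

Note that the first call downloads roughly a gigabyte of BF16 weights; the 0.5B size keeps inference feasible on CPU, though GPU is faster for long contexts.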
