aksamlan01/Qwen2.5-Coder-0.5B-Instruct-Gensyn-Swarm-robust_placid_cat
Text generation · Concurrency cost: 1 · Model size: 0.5B · Quantization: BF16 · Context length: 32k · Published: Nov 17, 2025 · Architecture: Transformer · Status: Warm

The aksamlan01/Qwen2.5-Coder-0.5B-Instruct-Gensyn-Swarm-robust_placid_cat model is a 0.5-billion-parameter instruction-tuned causal language model based on the Qwen2.5-Coder architecture. With a context length of 32,768 tokens (32k), it can handle moderately long inputs such as multi-file code excerpts. The 'Coder' designation indicates tuning for code-related tasks, making the model suitable for code generation, completion, and analysis.
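As a minimal sketch of how such a model might be used for code generation, the snippet below loads it through the Hugging Face `transformers` library and applies the standard Qwen2.5 chat template. This assumes the fine-tune keeps the upstream Qwen2.5-Coder chat format; the `build_messages` helper and the system prompt are illustrative, not part of the model card.

```python
MODEL_ID = "aksamlan01/Qwen2.5-Coder-0.5B-Instruct-Gensyn-Swarm-robust_placid_cat"


def build_messages(task: str) -> list[dict]:
    """Compose a chat-style prompt for a coding task (illustrative helper)."""
    return [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": task},
    ]


def generate(task: str, max_new_tokens: int = 256) -> str:
    """Run one generation round; assumes the model uses the Qwen2.5 chat template."""
    # Heavy dependencies are imported lazily so the prompt helpers stay importable.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

    # Render the chat messages into the model's prompt format.
    prompt = tokenizer.apply_chat_template(
        build_messages(task), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, dropping the prompt.
    return tokenizer.decode(
        out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
    )
```

At BF16 the 0.5B parameter count keeps memory requirements modest, so this sketch should run on CPU or a small GPU.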
