malimikinko/Qwen2.5-Coder-0.5B-Instruct-Gensyn-Swarm-furry_lively_mink
Text generation · Concurrency cost: 1 · Model size: 0.5B · Quant: BF16 · Context length: 32k · Published: Nov 20, 2025 · Architecture: Transformer · Status: Warm
This is a 0.5-billion-parameter instruction-tuned model published by malimikinko. The repository name indicates it is a fine-tune of Qwen2.5-Coder-0.5B-Instruct produced through a Gensyn swarm run (agent tag furry_lively_mink). With a 32k-token (32,768) context length, it is designed for general language understanding and generation tasks, with a coding-oriented base model. Beyond this, the listing does not detail the model's specific differentiators or primary use cases.
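As a quick way to try the model, the sketch below loads it with Hugging Face transformers and runs a single chat turn. It assumes the repository id from the page title resolves on the Hugging Face Hub and that the tokenizer ships with a standard Qwen2.5 chat template; adjust as needed.

```python
# Minimal sketch, assuming the repo id from this page resolves on the
# Hugging Face Hub and the tokenizer includes a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "malimikinko/Qwen2.5-Coder-0.5B-Instruct-Gensyn-Swarm-furry_lively_mink"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)  # BF16, per the listing

# Build a single-turn chat prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate and print only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```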
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature: – · top_p: – · top_k: – · frequency_penalty: – · presence_penalty: – · repetition_penalty: – · min_p: –
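Since the listing reports no concrete values, the sketch below only illustrates where each of these sampler settings goes in an OpenAI-compatible chat completions call. The base URL, the numeric values, and the extra_body field names are assumptions for illustration, not values taken from this page.

```python
# Minimal sketch of passing sampler settings through an OpenAI-compatible
# endpoint. Base URL and all sampler values below are assumptions; check
# your provider's docs for the supported parameter names.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="malimikinko/Qwen2.5-Coder-0.5B-Instruct-Gensyn-Swarm-furry_lively_mink",
    messages=[{"role": "user", "content": "Explain list comprehensions in one paragraph."}],
    # Standard OpenAI parameters travel as named arguments.
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard samplers are typically passed via extra_body,
    # if the server accepts them.
    extra_body={
        "top_k": 40,
        "min_p": 0.05,
        "repetition_penalty": 1.1,
    },
)
print(response.choices[0].message.content)
```

The split reflects the OpenAI API surface: temperature, top_p, and the two penalties are part of the standard chat completions schema, while top_k, min_p, and repetition_penalty are provider-specific extensions that the official client forwards only through extra_body.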