FAHAB/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hoarse_wily_sardine
Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Dec 7, 2025 · Architecture: Transformer · Warm
FAHAB/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hoarse_wily_sardine is a 1.5-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture, published by FAHAB. With a 32,768-token context length, the model targets general-purpose conversational AI tasks. Its primary strength is processing and generating human-like text over extended interactions, making it suitable for applications that require deep contextual understanding across long conversations or documents.
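A minimal usage sketch, assuming the checkpoint is hosted on the Hugging Face Hub and loadable with the `transformers` library (standard for Qwen2.5-family models); the system prompt, user message, and `max_new_tokens` value below are illustrative, not taken from the model card:

```python
# Hypothetical sketch: running one chat turn with this checkpoint via
# Hugging Face transformers. Assumes the repo id below is available on
# the Hub; generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "FAHAB/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hoarse_wily_sardine"

# Qwen2.5-Instruct models expect a chat-style message list.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize your strengths in one sentence."},
]


def generate(max_new_tokens: int = 256) -> str:
    """Download the checkpoint and generate a reply for `messages`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the model card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate())
```

Because the context window is 32k tokens, long multi-turn histories can be appended to `messages` directly rather than summarized between turns.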