syaeve/Qwen3-1.7B-base-MED
Text generation · Concurrency cost: 1 · Model size: 2B · Quantization: BF16 · Context length: 32k · Published: Mar 25, 2026 · Architecture: Transformer · Warm

syaeve/Qwen3-1.7B-base-MED is a 1.7 billion parameter base model from the Qwen3 family with a 32768-token (32k) context length. As a foundational large language model, it is designed for general-purpose language understanding and generation. Because it is a base (pretrained, non-instruction-tuned) model, it is well suited to further fine-tuning for applications that require robust language capabilities.
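The 32k context length is a hard budget shared by the prompt and any generated tokens, so long inputs must be trimmed before generation. A minimal stdlib-only sketch of that bookkeeping (token IDs represented as plain integers, not output from the model's actual tokenizer; `fit_to_context` and `max_new_tokens` are illustrative names, not part of this model's API):

```python
CTX_LEN = 32768  # context length of syaeve/Qwen3-1.7B-base-MED

def fit_to_context(token_ids, max_new_tokens=256, ctx_len=CTX_LEN):
    """Drop the oldest tokens so that prompt + generated tokens fit the window."""
    budget = ctx_len - max_new_tokens  # tokens left for the prompt
    if budget <= 0:
        raise ValueError("max_new_tokens exceeds the context length")
    return token_ids[-budget:]  # keep the most recent tokens

# A prompt longer than the window gets trimmed to 32768 - 256 = 32512 tokens.
prompt = list(range(40_000))
trimmed = fit_to_context(prompt)
print(len(trimmed))  # 32512
```

Keeping the most recent tokens (rather than the oldest) is the usual choice for chat-style or streaming inputs, where the tail of the context matters most.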
