totem205/Qwen3-1.7B-base-MED
Text generation · Concurrency cost: 1 · Model size: 2B · Quantization: BF16 · Context length: 32k · Published: Mar 25, 2026 · Architecture: Transformer · Warm

totem205/Qwen3-1.7B-base-MED is a 1.7-billion-parameter language model (rounded to 2B in the listing) built on the Qwen3 architecture. It is a base model: it has not been instruction-tuned and is intended for further fine-tuning or for specific downstream applications. Its primary utility is as a foundation for specialized AI tasks that call for a compact yet capable model.
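As a minimal sketch of how a base (non-instruct) checkpoint like this might be used for plain-text completion, assuming the model id resolves on the Hugging Face Hub and the standard `transformers` API applies (this snippet has not been tested against this specific checkpoint):

```python
# Sketch: plain-text completion with a base checkpoint via transformers.
# MODEL_ID is taken from this card; everything else is a generic pattern,
# not documentation specific to this model.
MODEL_ID = "totem205/Qwen3-1.7B-base-MED"

def complete(prompt: str, max_new_tokens: int = 64) -> str:
    # Lazy imports so the module loads even without the heavy deps installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

    # A base model takes raw text; there is no chat template to apply.
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

Because the model is not instruction-tuned, prompts should be written as text to be continued (e.g. few-shot examples) rather than as chat-style instructions.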
