hypaai/Qwen3-0.6B_2026-03-29_23-35-21
Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Mar 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

hypaai/Qwen3-0.6B_2026-03-29_23-35-21 is a fine-tuned causal language model built on the Qwen3-0.6B architecture (roughly 0.8 billion parameters in total), originally developed by Qwen and further adapted by hypaai. The model supports a context length of 32,768 tokens, making it suitable for tasks that require extensive contextual understanding. It is a specialized iteration of the Qwen3 series, produced through an additional fine-tuning procedure applied to the base checkpoint.
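
The snippet below is a minimal sketch of how a checkpoint like this is typically loaded and queried with the Hugging Face `transformers` library. The repository id is taken from the model name above; the BF16 dtype and 32k context figure come from the metadata, while the chat-template usage and generation settings are illustrative assumptions rather than documented defaults for this particular fine-tune.

```python
# Minimal loading/inference sketch, assuming the checkpoint is hosted on the
# Hugging Face Hub under the repository name above and follows the standard
# Qwen3 causal-LM loading path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hypaai/Qwen3-0.6B_2026-03-29_23-35-21"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed above
    device_map="auto",
)

# Qwen3 checkpoints ship a chat template; build a chat-style prompt with it.
messages = [{"role": "user", "content": "Summarize the Qwen3 architecture in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short completion; prompts plus output must fit the 32,768-token window.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```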
