ybyby624/Grace-Qwen3-4B-QASPER
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Mar 5, 2026 · License: MIT · Architecture: Transformer · Open Weights · Warm
ybyby624/Grace-Qwen3-4B-QASPER is a 4-billion-parameter language model based on the Qwen3 architecture, published by ybyby624. Judging by its name, it appears to be fine-tuned on the QASPER dataset (question answering over scientific research papers), and its 32768-token context length allows it to process long inputs such as full paper texts. Its primary differentiator is this specialized fine-tuning, which suits it to tasks requiring nuanced understanding and generation within that domain.
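Since the card highlights the 32768-token context length, here is a minimal sketch of how a caller might clamp a long prompt to that window before generation. This is illustrative only: the whitespace split is a stand-in for the model's real tokenizer (which a deployment would obtain via a library such as `transformers`), and the `clamp_prompt` helper and `RESERVED_FOR_OUTPUT` value are assumptions, not part of the model card.

```python
# Sketch: clamp a prompt to the 32768-token context window stated on the card.
# str.split() is only a stand-in for the model's actual tokenizer.

MAX_CTX = 32_768             # context length stated on the model card
RESERVED_FOR_OUTPUT = 1_024  # room left for generated tokens (assumed value)

def clamp_prompt(text: str, max_ctx: int = MAX_CTX,
                 reserved: int = RESERVED_FOR_OUTPUT) -> str:
    """Keep at most (max_ctx - reserved) whitespace 'tokens' of the prompt."""
    budget = max_ctx - reserved
    tokens = text.split()
    if len(tokens) <= budget:
        return text
    # Keep the most recent part of the prompt, which usually matters most.
    return " ".join(tokens[-budget:])

print(len(clamp_prompt("word " * 40_000).split()))  # prints 31744
```

With a real tokenizer the same idea applies: count tokens, subtract the generation budget, and truncate from the start of the prompt so the most recent context survives.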