emmanuelaboah01/qiu-v8-qwen3-4b-stage3-enriched-fullseq-merged
Text Generation | Model size: 4B | Quantization: BF16 | Context length: 32k | Concurrency cost: 1 | Architecture: Transformer | Published: Mar 22, 2026

The emmanuelaboah01/qiu-v8-qwen3-4b-stage3-enriched-fullseq-merged model is a 4-billion-parameter language model with a 32,768-token context length. It is based on the Qwen3 architecture, suited to general language understanding and generation tasks. Its name suggests a multi-stage fine-tune ("stage3") trained on enriched, full-sequence data, with the resulting weights merged back into the base model.
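As a rough sizing sketch for deployment, the listed figures (4B parameters, BF16 weights) imply the weights alone occupy about 7.5 GiB, before accounting for activations or the KV cache. This assumes exactly 4.0e9 parameters, which is only an approximation of the "4B" label:

```python
# Back-of-the-envelope memory estimate for the BF16 weights.
# Assumption: exactly 4.0e9 parameters; the true count may differ slightly.
PARAMS = 4_000_000_000
BYTES_PER_PARAM = 2  # bfloat16 is 16 bits = 2 bytes per parameter

weight_bytes = PARAMS * BYTES_PER_PARAM
weight_gib = weight_bytes / 2**30  # convert bytes to GiB

print(f"Weights alone: ~{weight_gib:.1f} GiB")  # ~7.5 GiB
```

Actual serving memory will be higher: the KV cache grows with batch size and sequence length, so a full 32k-token context adds a significant further overhead on top of the weights.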
