emmanuelaboah01/qiu-v8-qwen3-4b-stage3-hard-4epoch-merged
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Mar 23, 2026 · Architecture: Transformer

The emmanuelaboah01/qiu-v8-qwen3-4b-stage3-hard-4epoch-merged model is a 4-billion-parameter language model with a 32,768-token context length. As its name suggests, it appears to be a merged fine-tuned variant of the Qwen3 4B base model, trained for 4 epochs on a "stage 3 hard" dataset. It is suited to scenarios that call for a compact yet capable language model with a substantial context window.
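A minimal sketch of one way to load and query this checkpoint with the Hugging Face `transformers` library. The model ID and BF16 dtype come from the card above; the prompt, generation settings, and function name are illustrative assumptions, not part of the published model card.

```python
# Hedged sketch: loading the checkpoint via transformers (assumed workflow).
MODEL_ID = "emmanuelaboah01/qiu-v8-qwen3-4b-stage3-hard-4epoch-merged"
CONTEXT_LENGTH = 32768  # maximum context in tokens, per the card


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion for `prompt` using the merged checkpoint."""
    # Imported lazily so the constants above are usable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="bfloat16",  # matches the published BF16 weights
        device_map="auto",       # place layers on available GPU(s)/CPU
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Because the weights are published in BF16, no quantization step is needed at load time; for GPUs without BF16 support, `torch_dtype="float16"` is a common fallback.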