emmanuelaboah01/qiu-v8-qwen3-4b-instruct-primary-stage1-merged
Text Generation
Concurrency Cost: 1
Model Size: 4B
Quant: BF16
Ctx Length: 32k
Published: Mar 20, 2026
Architecture: Transformer
Status: Warm

emmanuelaboah01/qiu-v8-qwen3-4b-instruct-primary-stage1-merged is a 4-billion-parameter instruction-tuned language model based on the Qwen3 architecture, with a 32,768-token (32k) context length. The name indicates a merged checkpoint from a "primary stage 1" training phase, i.e., an early point in the model's development. Its specific differentiators and primary use cases are not detailed in the information provided.
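As an instruction-tuned Qwen3 derivative, the model is typically prompted with chat-formatted input. The sketch below shows a ChatML-style prompt builder plus a rough check against the card's stated 32k context window; the ChatML markers follow the convention used by Qwen-family models, and the chars-per-token heuristic is an illustrative assumption (real token counts come from the model's tokenizer).

```python
# Sketch: building a ChatML-style prompt for a Qwen-family instruct model
# and sanity-checking it against the card's 32k context window.
# The 4-chars-per-token ratio is a rough heuristic, not the real tokenizer.

MODEL_ID = "emmanuelaboah01/qiu-v8-qwen3-4b-instruct-primary-stage1-merged"
CONTEXT_LENGTH = 32_768  # 32k tokens, per the model card


def build_chatml_prompt(messages):
    """Render (role, content) pairs in the ChatML format used by Qwen models."""
    parts = [
        f"<|im_start|>{role}\n{content}<|im_end|>" for role, content in messages
    ]
    # Open the assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


def fits_context(prompt, chars_per_token=4):
    """Approximate token count from characters and compare to the window."""
    return len(prompt) / chars_per_token <= CONTEXT_LENGTH


prompt = build_chatml_prompt([
    ("system", "You are a helpful assistant."),
    ("user", "Summarize the Qwen3 architecture in one sentence."),
])
print(fits_context(prompt))  # a short prompt easily fits the 32k window
```

In practice the prompt would be tokenized and generated with the model's own tokenizer and runtime rather than this character-count heuristic.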
