DQN-Labs/dqncode1-16bit
Text Generation · Model Size: 4B · Quant: BF16 · Context Length: 32k · Concurrency Cost: 1 · Published: Mar 28, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

DQN-Labs/dqncode1-16bit is a 4-billion-parameter language model from DQN-Labs, built on the Qwen3 architecture and finetuned with Unsloth and Hugging Face's TRL library. This training setup is optimized for efficiency, achieving 2x faster finetuning speeds. The model is intended for general language tasks.