DQN-Labs/dqnCode-v0.4-1.5B-HF
Task: Text generation | Concurrency cost: 1 | Model size: 1.5B | Quantization: BF16 | Context length: 32k | Published: Feb 28, 2026 | License: apache-2.0 | Architecture: Transformer | Open weights

DQN-Labs/dqnCode-v0.4-1.5B-HF is a 1.5-billion-parameter language model converted to MLX format from Qwen/Qwen2.5-Coder-1.5B-Instruct, with a 32,768-token context length. It is designed and optimized for code generation and code understanding tasks, and it inherits the coder-focused, instruction-tuned behavior of its base model, making it an efficient option for developers working within the MLX ecosystem.
