taki555/Qwen3-4B-Instruct-2507-Art
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 27, 2026 · License: apache-2.0 · Architecture: Transformer
taki555/Qwen3-4B-Instruct-2507-Art is a 4-billion-parameter instruction-tuned causal language model based on the Qwen3 architecture, developed by Taiqiang Wu, Zenan Xu, Bo Zhou, and Ngai Wong. The model is optimized for efficient Chain-of-Thought (CoT) reasoning: it is trained to produce short yet accurate reasoning trajectories, using reinforcement learning with reward shaping to reduce decoding overhead while retaining the accuracy benefits of extended reasoning. This makes it suitable for tasks that call for concise, precise step-by-step reasoning.
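A minimal usage sketch, assuming the model loads through the standard Hugging Face `transformers` Qwen3 support; the system prompt nudging the model toward brief reasoning, and the `generate` helper itself, are illustrative assumptions, not documented usage.

```python
# Hypothetical usage sketch for taki555/Qwen3-4B-Instruct-2507-Art.
# Assumes standard transformers Qwen3 support; prompt wording is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "taki555/Qwen3-4B-Instruct-2507-Art"

def build_messages(question: str) -> list[dict]:
    # Chat-format messages; the brevity-oriented system prompt is illustrative.
    return [
        {"role": "system", "content": "Reason step by step, but keep it brief."},
        {"role": "user", "content": question},
    ]

def generate(question: str, max_new_tokens: int = 512) -> str:
    # Load in BF16 (matching the listed quantization) and generate a reply.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(question),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

With concise-CoT models like this, a modest `max_new_tokens` budget is usually sufficient, since the short reasoning trajectories are the point of the training objective.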