DASD-4B-Thinking is a 4-billion-parameter dense language model developed by Alibaba-Apsara, specialized in long chain-of-thought (Long-CoT) reasoning across mathematics, code generation, and scientific reasoning. Post-trained from Qwen3-4B-Instruct-2507 and distilled from gpt-oss-120b via a novel distribution-aligned sequence distillation pipeline, it achieves strong reasoning performance with only 448K training samples. The model excels at complex reasoning tasks and can run on consumer hardware, offering a data-efficient option for advanced analytical applications.
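The exact details of the distribution-aligned sequence distillation pipeline are not given here, but the general idea behind sequence-level distillation can be sketched as minimizing a per-token KL divergence between the teacher's and student's next-token distributions over teacher-generated sequences. A minimal NumPy illustration (toy shapes and function names are our own, not from the model's training code):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the vocabulary axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sequence_distill_loss(teacher_logits, student_logits):
    """Token-level KL(teacher || student), averaged over the sequence.

    teacher_logits, student_logits: arrays of shape (seq_len, vocab_size).
    This is a generic distillation objective, not the model's actual pipeline.
    """
    p = softmax(teacher_logits)              # teacher distribution per token
    log_q = np.log(softmax(student_logits))  # student log-probs per token
    log_p = np.log(p)
    kl_per_token = (p * (log_p - log_q)).sum(axis=-1)
    return kl_per_token.mean()

# toy check: identical logits give (near-)zero loss, mismatched logits a positive one
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 32))
b = rng.normal(size=(8, 32))
print(sequence_distill_loss(a, a))  # ~0.0
print(sequence_distill_loss(a, b))  # > 0
```

Training the student on teacher-generated long-CoT sequences with such an objective is one way a small model can inherit reasoning behavior from a much larger one at modest data cost.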