TIGER-Lab/SWE-Next-14B

Text Generation · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 2, 2026 · License: MIT · Architecture: Transformer · Open Weights

SWE-Next-14B is a 14.8-billion-parameter repository-level software engineering agent developed by TIGER-Lab and fine-tuned from Qwen/Qwen2.5-Coder-14B-Instruct. It is trained on execution-grounded trajectories from real merged pull requests, with an emphasis on clean repository-level repair traces and recovery-style debugging, and it improves pass@1 performance on SWE-Bench Verified and SWE-Bench Lite.

SWE-Next-14B Overview

SWE-Next-14B is a 14.8 billion parameter language model developed by TIGER-Lab, specifically designed as a repository-level software engineering agent. It is fine-tuned from the Qwen/Qwen2.5-Coder-14B-Instruct base model using full-parameter supervised fine-tuning (SFT).
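As a rough sketch (not taken from TIGER-Lab's documentation), the checkpoint should load like any other Qwen2.5-based causal LM through the Hugging Face transformers API. Only the repository id TIGER-Lab/SWE-Next-14B comes from this page; the prompt and generation settings are hypothetical:

```python
# Minimal sketch: loading SWE-Next-14B with Hugging Face transformers.
# Assumes a standard Qwen2.5-style causal-LM checkpoint; verify against the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/SWE-Next-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # resolve weight dtype from the checkpoint
    device_map="auto",    # shard across available GPUs
)

# A repository-repair style prompt; the exact agent scaffold is not specified here.
messages = [
    {"role": "system", "content": "You are a software engineering agent fixing bugs in a repository."},
    {"role": "user", "content": "The test tests/test_parser.py::test_empty_input fails. Locate and fix the bug."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```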

Key Capabilities & Training

This model is trained on 3,693 selected SFT trajectories from the SWE-Next collection, which are execution-grounded and derived from real merged pull requests. A core innovation is the use of repo-quarter profiles for efficient and reproducible environment management, which allowed 3,971 seed repositories and 102,582 candidate commit pairs to be processed into 2,308 self-verifying instances. The training data focuses on repository-level repair traces and recovery-style debugging rather than isolated code completion.
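The pipeline is only summarized here, but the self-verification criterion can be illustrated. The sketch below, with hypothetical function names and instance fields, assumes an instance is kept only when its tests fail before the gold patch and pass after it:

```python
# Hypothetical sketch of the "self-verifying instance" filter described above:
# a candidate commit pair (pre-commit repo state, gold patch, test command) is
# kept only if its tests fail before the patch and pass after. Function and
# parameter names are illustrative, not from the SWE-Next codebase.
import subprocess

def tests_pass(repo_dir: str, test_cmd: list[str]) -> bool:
    """Run the instance's test command in the repo and report success."""
    result = subprocess.run(test_cmd, cwd=repo_dir, capture_output=True, timeout=1800)
    return result.returncode == 0

def is_self_verifying(repo_dir: str, patch_file: str, test_cmd: list[str]) -> bool:
    """Keep an instance only if the gold patch flips the tests from fail to pass."""
    if tests_pass(repo_dir, test_cmd):            # bug must be observable pre-patch
        return False
    subprocess.run(["git", "apply", patch_file], cwd=repo_dir, check=True)
    try:
        return tests_pass(repo_dir, test_cmd)     # patch must actually fix it
    finally:
        # restore the pre-patch state so the repo can be reused
        subprocess.run(["git", "apply", "-R", patch_file], cwd=repo_dir, check=True)
```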

Performance & Use Cases

SWE-Next-14B demonstrates improved downstream pass@1 scores on SWE-Bench Verified and SWE-Bench Lite, evidence that large-scale executable data collection of this kind pays off in practice. With a context length of 32,768 tokens, it is well suited to complex software engineering tasks that require deep contextual understanding of a codebase. The model is a good fit for automating and assisting real-world software development workflows, particularly debugging, code repair, and agent-based software engineering.
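Making use of the 32,768-token window in practice means budgeting repository context against the prompt. The helper below is an illustrative sketch, not a documented recommendation; the budget split and function names are assumptions:

```python
# Illustrative only: packing repository context into the 32,768-token window
# before prompting the model. The output headroom is an assumed value.
from transformers import AutoTokenizer

CTX_LEN = 32_768
RESERVED_FOR_OUTPUT = 4_096      # assumed headroom for the generated patch

tokenizer = AutoTokenizer.from_pretrained("TIGER-Lab/SWE-Next-14B")

def pack_context(issue_text: str, files: dict[str, str]) -> str:
    """Greedily append file contents until the prompt token budget is exhausted."""
    budget = CTX_LEN - RESERVED_FOR_OUTPUT - len(tokenizer.encode(issue_text))
    parts = [issue_text]
    for path, source in files.items():
        block = f"\n### {path}\n{source}"
        cost = len(tokenizer.encode(block))
        if cost > budget:
            break                 # stop once the window is full
        parts.append(block)
        budget -= cost
    return "".join(parts)
```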