SWE-bench/SWE-agent-LM-7B
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Jul 12, 2025 · License: Apache-2.0 · Architecture: Transformer · Open weights
SWE-agent-LM-7B: Language Model for Software Engineering
SWE-agent-LM-7B is a 7.6 billion parameter language model developed by the SWE-bench team, specifically fine-tuned for software engineering tasks. Its training data was generated with the SWE-smith toolkit, with a focus on strengthening agentic capabilities in code environments.
Key Capabilities
- Software Engineering Optimization: Fine-tuned on 5,000 trajectories generated by SWE-agent + Claude 3.7 Sonnet, making it highly specialized for software development tasks.
- SWE-agent Compatibility: Fully compatible with the SWE-agent framework, enabling seamless integration into automated coding workflows.
- Open Source: The model weights are released under the Apache-2.0 license, promoting transparency and community contributions.
- Efficient Training: Fine-tuned from Qwen 2.5 Coder Instruct, demonstrating that a specialized model can be built efficiently on top of an existing strong base model.
Good For
- Automated Code Development: Ideal for use cases involving autonomous software agents that need to understand, generate, and modify code.
- Research in Agentic AI: Provides a strong foundation for researchers exploring the capabilities of AI agents in complex software engineering environments.
- Local Deployment: Designed for straightforward local deployment, allowing developers to run and experiment with the model easily.
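As a minimal sketch of the local-deployment use case above: once the model is served behind an OpenAI-compatible endpoint (for example with an inference server such as vLLM), an agent harness can query it over plain HTTP. The endpoint URL, port, and system prompt below are assumptions for illustration, not part of the model card; only the model id `SWE-bench/SWE-agent-LM-7B` comes from the card itself.

```python
import json

# Model id from the card. The base URL is a hypothetical local
# OpenAI-compatible endpoint (e.g. one started by an inference server
# serving SWE-bench/SWE-agent-LM-7B on port 8000).
MODEL_ID = "SWE-bench/SWE-agent-LM-7B"
BASE_URL = "http://localhost:8000/v1"  # assumed local endpoint

def build_chat_request(task: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-compatible /chat/completions payload for a coding task."""
    return {
        "model": MODEL_ID,
        "messages": [
            # The system prompt here is illustrative, not the one used by SWE-agent.
            {"role": "system", "content": "You are a software engineering agent."},
            {"role": "user", "content": task},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.0,  # deterministic decoding for repeatable agent runs
    }

payload = build_chat_request("Fix the failing test in utils/parse.py.")
print(json.dumps(payload, indent=2))
```

The payload can then be POSTed to `{BASE_URL}/chat/completions` with any HTTP client; in practice the SWE-agent framework handles this request loop itself.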