SERA-32B-GA: A Leading Open-Source Coding Agent
SERA-32B-GA is the second model in the Allen Institute for AI's (Ai2) Open Coding Agents series, designed for automated software engineering. Built on the Qwen 3-32B base model with GLM-4.5-Air as its teacher, this 32-billion-parameter model demonstrates strong performance on code generation and modification tasks.
Key Capabilities & Performance
- High SWE-bench Performance: Achieves 46.6% on SWE-bench Verified, positioning it as one of the strongest open-source coding agents, second only to SERA-32B.
- 32K Context Length: Supports a 32K-token context window for handling complex codebases and tasks.
- Synthetic Trajectory Training: Trained on 16,000 synthetic coding agent trajectories generated via Soft Verified Generation (SVG), a method that removes the need for test infrastructure during data creation.
- CLI Integration: Easily accessible via the `sera` CLI for quick deployment and integration with existing workflows.
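Beyond the `sera` CLI, open-weight models like this one are commonly served behind an OpenAI-compatible chat endpoint (for example, via a local vLLM server). The sketch below is hypothetical: the endpoint URL and the served model name `SERA-32B-GA` are assumptions for illustration, not documented values from this card.

```python
import json

# Assumed local OpenAI-compatible server (e.g. started with vLLM);
# this URL is an illustration, not part of the sera tooling.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_request(issue_text: str, max_tokens: int = 2048) -> dict:
    """Build a chat-completions payload asking the agent to propose a patch."""
    return {
        "model": "SERA-32B-GA",  # assumed served model name
        "messages": [
            {"role": "system",
             "content": "You are a software engineering agent. Propose a patch."},
            {"role": "user", "content": issue_text},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.0,  # deterministic output for reproducible patches
    }

payload = build_request("TypeError in utils.py when the input list is empty")
body = json.dumps(payload)  # would be POSTed to ENDPOINT by an HTTP client
```

The payload construction is shown separately from the network call so the request shape is easy to inspect or log before sending.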
Intended Use Cases
- Automated Software Engineering: Ideal for tasks such as bug fixing, implementing new features, and code refactoring.
- Repository Specialization: Can be fine-tuned on private codebases to create highly specialized coding agents.
- Research: Valuable for studying coding agents, data generation methodologies, and agent behavior.
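Repository specialization via fine-tuning typically starts from supervised pairs of issue descriptions and accepted patches drawn from the target codebase. A minimal, hypothetical sketch of converting such pairs into chat-format training records (the record schema and field names are assumptions, not a documented format):

```python
def to_sft_record(issue: str, patch: str) -> dict:
    """Convert an (issue, patch) pair into a chat-format fine-tuning record."""
    return {
        "messages": [
            {"role": "system",
             "content": "You are a coding agent for this repository."},
            {"role": "user", "content": issue},       # the bug report / feature request
            {"role": "assistant", "content": patch},  # the target diff to imitate
        ]
    }

# One record per historical fix; a real dataset would iterate over the
# repository's issue/PR history.
records = [to_sft_record("Fix off-by-one error in pagination",
                         "example unified diff goes here")]
```

Keeping records in the same chat format the model sees at inference time avoids a train/serve mismatch in prompt structure.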
Limitations
- Primarily validated on SWE-bench Verified (Python repositories); performance on other languages or benchmarks is not guaranteed.
- Performance is largely bounded by the capabilities of its teacher model, GLM-4.5-Air.
- May generate insecure or incorrect code, requiring human review and testing before deployment.