OpenHands/openhands-lm-32b-v0.1 is a 32-billion-parameter open coding model developed by OpenHands, built on Qwen2.5-Coder-32B-Instruct. It features a 128K-token context window and is fine-tuned using an RL-based framework on OpenHands-generated data from diverse open-source repositories. The model achieves a 37.2% resolve rate on SWE-Bench Verified, demonstrating performance on software engineering tasks comparable to models with significantly more parameters.
OpenHands LM v0.1: An Open Coding Agent Model
OpenHands LM v0.1 is a 32-billion-parameter language model developed by OpenHands, designed specifically for autonomous software development agents. Built on Qwen2.5-Coder-32B-Instruct, it distinguishes itself through a specialized fine-tuning process: an RL-based framework trained on data generated by OpenHands itself from a diverse array of open-source repositories, focusing on successfully resolved examples.
Key Capabilities & Features
- Open and Locally Deployable: Available on Hugging Face, allowing local deployment on hardware such as a single NVIDIA RTX 3090 GPU.
- Extended Context Window: Features a 128K token context window, making it suitable for handling large codebases and long-horizon software engineering tasks.
- Strong Performance on SWE-Bench: Achieves a 37.2% resolve rate on the SWE-Bench Verified benchmark, comparable to models with significantly more parameters (e.g., 20x larger).
- Optimized for Software Engineering: Fine-tuned specifically for software development tasks, particularly those involving resolving GitHub issues.
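Because the weights are open, the model can be served locally behind an OpenAI-compatible API (for example with vLLM or a similar inference server) and driven like any chat model. A minimal sketch of building such a request, assuming a server on localhost:8000 and the model ID above (the endpoint and serving setup are assumptions, not details from the model card):

```python
import json

# Assumptions (not from the model card): the model is served behind an
# OpenAI-compatible endpoint on localhost:8000, e.g. via vLLM.
BASE_URL = "http://localhost:8000/v1"
MODEL_ID = "OpenHands/openhands-lm-32b-v0.1"

def build_chat_request(prompt: str, temperature: float = 0.0) -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        # A low temperature can help curb repetitive generations.
        "temperature": temperature,
    }

payload = build_chat_request("Resolve the failing test in utils.py")
print(json.dumps(payload, indent=2))
# POST this JSON to f"{BASE_URL}/chat/completions" with any HTTP client.
```

The same payload shape works with the official `openai` Python client by setting its `base_url` to the local server, so existing agent tooling can switch to the local model with a one-line change.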
Use Cases & Considerations
OpenHands LM is suited to developers and organizations looking to integrate a capable, open-source coding agent into their workflows, especially for resolving GitHub issues and general software engineering tasks. While it offers impressive efficiency for its size, it is currently a research preview: it may sometimes generate repetitive steps, and it is sensitive to quantization, with performance degrading at lower quantization levels. Future releases aim to address these limitations and introduce more compact versions.