Lite-Coder/LiteCoder-Terminal-4b-sft
LiteCoder/LiteCoder-Terminal-4b-sft is a 4-billion-parameter language model developed by Lite-Coder, fine-tuned from Qwen3-4B-Instruct-2507 with a 32,768-token context length. It is optimized for lightweight code-agent tasks, trained on the expanded 11,255-sample LiteCoder-Terminal-SFT dataset. The model shows consistent improvements in Terminal Bench evaluations, making it suitable for terminal-based code generation and interaction.
LiteCoder-Terminal-4b-sft Overview
LiteCoder-Terminal-4b-sft is a 4-billion-parameter model developed by Lite-Coder, designed specifically for lightweight code-agent applications. It is fine-tuned from the Qwen3-4B-Instruct-2507 base model on a supervised fine-tuning dataset called LiteCoder-Terminal-SFT.
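A typical way to load the checkpoint is via Hugging Face Transformers. The sketch below is illustrative: the repo id is taken from this card, and the generation settings (`max_new_tokens`, dtype, device placement) are assumptions, not values stated by the authors.

```python
def build_messages(task: str) -> list[dict]:
    """Wrap a terminal task as a single-turn chat message."""
    return [{"role": "user", "content": task}]


def generate(task: str, max_new_tokens: int = 256) -> str:
    """Load LiteCoder-Terminal-4b-sft and generate a reply.

    Imports are kept local so build_messages stays usable
    without transformers installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Lite-Coder/LiteCoder-Terminal-4b-sft"  # repo id from this card
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    # Apply the chat template inherited from the Qwen3 base model.
    inputs = tok.apply_chat_template(
        build_messages(task), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens.
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```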
Key Capabilities & Enhancements
- Specialized Code Agent Training: The model is fine-tuned on 11,255 trajectories from the LiteCoder-Terminal-SFT dataset, which incorporates a broader task taxonomy and diverse agent scaffolds.
- Improved Terminal Bench Performance: Compared to its preview version, the model shows consistent gains across Terminal Bench 1.0, 2.0, and Pro, indicating stronger performance on terminal-based coding tasks.
- Lightweight Design: As a 4 billion parameter model, it offers a balance between performance and computational efficiency for code agent use cases.
Performance Highlights
On Terminal Bench 1.0, LiteCoder-Terminal-4b-sft achieves 13.44% pass@1 and 30% pass@4 with the Terminus 2 agent, outperforming the base Qwen3-4B-Instruct. In Terminal Bench 2.0, it scores 5.62% pass@1 and 12.36% pass@4. For Terminal Bench Pro, it reaches 15.5% pass@1.
Ideal Use Cases
This model is particularly well suited to applications that require efficient, effective code generation and interaction within terminal environments, especially where resource constraints call for a smaller yet capable model.
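As a sketch of what such terminal interaction looks like: the scaffold below is illustrative only (it is not the Terminus 2 agent or any scaffold from the LiteCoder-Terminal-SFT dataset), and `model` stands in for any callable that wraps the checkpoint and returns the next shell command.

```python
import subprocess


def run_agent(model, task: str, max_steps: int = 8) -> list[tuple[str, str]]:
    """Minimal terminal-agent loop: the model proposes a shell command,
    we execute it and append the output to the transcript, repeating
    until the model replies "DONE" or the step budget runs out.
    """
    transcript = [("task", task)]
    for _ in range(max_steps):
        command = model(transcript)
        if command.strip() == "DONE":
            break
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=30
        )
        # Feed both stdout and stderr back so the model sees failures.
        transcript.append((command, result.stdout + result.stderr))
    return transcript
```

A real deployment would add sandboxing and output truncation; the loop above only shows the command-execute-observe cycle that Terminal Bench-style tasks exercise.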