goodknightleo/qwen3-4b-coder-sft is a 4-billion-parameter causal language model based on Qwen3, fine-tuned by goodknightleo for production-level code generation. It interoperates with Claude Code and Ollama, is optimized for multi-turn continuity and tool-first execution, and supports a 32,768-token context length, making it well suited to coding tasks inside integrated development environments.
Overview
goodknightleo/qwen3-4b-coder-sft is a 4-billion-parameter Qwen3-based model, fine-tuned by goodknightleo as a production-ready coding assistant. It interoperates with Claude Code and Ollama, making it suitable for integrated development workflows, and is designed for robust multi-turn continuity and efficient tool-first execution in complex coding scenarios.
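As a minimal sketch, assuming the model is published under this repo id on the Hugging Face Hub, it could be driven through the standard `transformers` chat-template workflow (the system prompt text and generation settings below are illustrative, not the model's shipped defaults):

```python
# Sketch: one generation turn with Hugging Face transformers.
# The repo id comes from this model card; everything else is illustrative.
MODEL_ID = "goodknightleo/qwen3-4b-coder-sft"

def generate_code(prompt: str, system: str = "You are a coding assistant.") -> str:
    """Run a single chat turn; downloads the 4B checkpoint on first use."""
    # Imported lazily so the sketch can be read without the model installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
    ]
    # The tokenizer's chat template formats the messages for the model.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

The 32,768-token context length leaves ample room for the system prompt, prior turns, and retrieved code when this pattern is extended to full conversations.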
Key Capabilities
- Code Generation: Optimized for generating production-quality code.
- Claude Code & Ollama Interoperability: Configured for seamless integration with Claude Code and local Ollama instances.
- Multi-turn Continuity: Enhanced for maintaining context and intent across multiple conversational turns.
- Tool-First Execution: Prioritizes and effectively utilizes external tools for coding tasks.
- Custom System Prompt: Ships with a recommended system prompt that improves follow-up intent handling and tool utilization.
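Multi-turn continuity in chat models of this kind comes from resending the full conversation, system prompt first, on every turn. A minimal sketch of maintaining such a history (the role names follow the standard chat-message format; the system prompt text is a placeholder, not the model's actual prompt):

```python
# Sketch: maintaining a multi-turn chat history for the model.
# SYSTEM_PROMPT is a placeholder, not the prompt shipped with the model.
SYSTEM_PROMPT = "You are a coding assistant. Prefer calling tools over guessing."

def new_conversation() -> list[dict]:
    """Start a history with the system prompt as the first message."""
    return [{"role": "system", "content": SYSTEM_PROMPT}]

def add_turn(history: list[dict], user_msg: str, assistant_msg: str) -> list[dict]:
    """Append one user/assistant exchange, preserving all earlier context."""
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": assistant_msg})
    return history

conv = new_conversation()
add_turn(conv, "Write a Python slugify function.", "def slugify(s): ...")
add_turn(conv, "Now add unicode handling.", "def slugify(s): ...")
```

Each follow-up request sees the entire `conv` list, which is what lets the model resolve references like "now add unicode handling" against earlier turns.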
Good for
- Developers seeking a local, efficient code generation model.
- Environments leveraging Claude Code or Ollama for coding tasks.
- Projects requiring robust multi-turn coding assistance and tool integration.
- Users looking for a model fine-tuned on a diverse coding dataset (goodknightleo/coding-sft-mix-50k).
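For local use, assuming the model has been imported into Ollama under a tag such as `qwen3-4b-coder-sft` (the tag here is an assumption, not a published one), the official `ollama` Python client could drive it:

```python
# Sketch: querying a local Ollama instance with the official Python client.
# Requires `pip install ollama`, a running Ollama server, and the model
# imported under the tag below (the tag is an assumption, not published).
def ask(prompt: str, model_tag: str = "qwen3-4b-coder-sft") -> str:
    """Send one coding prompt to the local model and return its reply text."""
    # Imported lazily so the sketch can be read without the client installed.
    import ollama

    response = ollama.chat(
        model=model_tag,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]
```

In an editor-integrated workflow, the same call pattern can be wrapped behind whatever tool interface Claude Code or the IDE exposes.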