Model Overview
sunkencity/qwen25-3b-openclaw is a 3.1-billion-parameter model based on Qwen/Qwen2.5-3B-Instruct, fine-tuned specifically for robust tool and function calling. Developed by sunkencity, it was trained with LoRA (rank=16, alpha=32, applied across all 32 layers) on approximately 57,000 tool-call examples drawn from hermes-function-calling-v1 and glaive-function-calling-v2.
Key Capabilities
- Exceptional Tool Calling: Achieves a tool_score of 0.989 on a held-out evaluation set, with perfect function name identification (1.000 name_accuracy) and strong argument extraction (0.983 arg_f1).
- Structured Output: Produces tool calls in the Hermes `<tool_call>` JSON format, compatible with OpenAI-style tool-use pipelines.
- Multi-tool Selection: Reliably selects the correct tool even when multiple options are available.
- Efficient Deployment: At 3.1 billion parameters, the model is small enough for offline and privacy-first deployments and runs quickly on Apple Silicon or modest GPUs.
- OpenClaw Integration: Purpose-built as a local agent model for OpenClaw / LocalClaw, handling tasks like calendars, email, web search, and custom skills.
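The Hermes `<tool_call>` format mentioned above wraps a JSON object (with `name` and `arguments` keys) in XML-style tags. A minimal parser for such completions might look like this; the `get_weather` tool and the sample completion are illustrative, not from the model card:

```python
import json
import re

def parse_tool_calls(completion: str) -> list[dict]:
    """Extract Hermes-style <tool_call> blocks from a model completion.

    Each block contains a JSON object with "name" and "arguments" keys.
    """
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(block) for block in pattern.findall(completion)]

# Illustrative completion in the Hermes format (hypothetical tool):
completion = (
    "I will check the weather.\n"
    "<tool_call>\n"
    '{"name": "get_weather", "arguments": {"city": "Berlin", "unit": "celsius"}}\n'
    "</tool_call>"
)
calls = parse_tool_calls(completion)
print(calls[0]["name"])  # get_weather
```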
Ideal Use Cases
- OpenClaw / LocalClaw Agent: Serves as a drop-in local model for the tool-calling tier within the OpenClaw ecosystem.
- OpenAI-Compatible Tool-Use Pipelines: Responds to the standard `tools` parameter and generates structured function calls.
- Offline & Privacy-First Applications: Enables local execution of tool-calling tasks without cloud dependencies.
- Argument Extraction: Highly effective at extracting typed arguments from natural language queries.
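Because the model responds to the standard `tools` parameter, a request to any OpenAI-compatible server could be shaped as below. This is a sketch: the model name matches the card, but the `create_calendar_event` tool schema and the user message are hypothetical examples.

```python
import json

# Hedged sketch of an OpenAI-compatible chat request carrying a tools schema.
# The tool definition is hypothetical; only the model name comes from the card.
request_body = {
    "model": "sunkencity/qwen25-3b-openclaw",
    "messages": [
        {"role": "user", "content": "Add lunch with Sam to my calendar tomorrow at noon."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "create_calendar_event",  # hypothetical tool
                "description": "Create a calendar event.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "start": {"type": "string", "description": "ISO 8601 datetime"},
                    },
                    "required": ["title", "start"],
                },
            },
        }
    ],
}
payload = json.dumps(request_body)
```

The serialized `payload` would be POSTed to the server's chat-completions endpoint; the model is expected to answer with a structured function call naming one of the supplied tools.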
Limitations
- Not recommended for long multi-turn reasoning chains; larger models are better suited for orchestration.
- Biased toward emitting a tool call whenever tools are available, so it is less suited to tasks where no tool should be used.
- Training data is English-only, limiting its performance in other languages.