beyoru/Qwen3-4B-I-1209

4B parameters · BF16 · 32,768-token context · Sep 24, 2025 · License: apache-2.0

Model Overview

beyoru/Qwen3-4B-I-1209 is a 4-billion-parameter model built on the Qwen3-4B-Instruct-2507 base. Developed by Beyoru, it is fine-tuned specifically for tool use and function-call generation using reinforcement learning with GRPO (Group Relative Policy Optimization).

Key Differentiators

  • Specialized Fine-tuning: Utilizes GRPO with a multi-signal reward system, including rule-based, self-certainty, and tool-call rewards, to optimize for accurate and confident function calling.
  • Enhanced Tool-Use Performance: Achieves an overall accuracy of 0.7233 on the ACEBench evaluation, significantly surpassing its base model (Qwen3-4B-Instruct-2507 at 0.635) and Salesforce/Llama-xLAM-2-8b-fc-r (0.5792).
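Models in the Qwen3 family emit function calls as JSON objects wrapped in `<tool_call>…</tool_call>` tags in the generated text. A minimal parser for that format might look like the sketch below; the sample string is illustrative, not a real generation from this model.

```python
import json
import re

# Matches a JSON object wrapped in Qwen-style <tool_call> tags.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(model_output: str) -> list[dict]:
    """Extract {"name": ..., "arguments": ...} objects from <tool_call> blocks."""
    calls = []
    for match in TOOL_CALL_RE.finditer(model_output):
        call = json.loads(match.group(1))
        if "name" in call and "arguments" in call:
            calls.append(call)
    return calls

# Illustrative output in the Qwen3 tool-call format (not a real model response):
sample = '<tool_call>\n{"name": "get_weather", "arguments": {"city": "Hanoi"}}\n</tool_call>'
print(parse_tool_calls(sample))
```

In practice the tool schemas are passed to the model via the tokenizer's chat template, and the parsed `name`/`arguments` pairs are what downstream code validates and executes.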

Training Configuration

The model was trained with the AdamW optimizer, a learning rate of 5e-6 under a cosine_with_min_lr scheduler (cosine decay to a minimum learning rate), sampling 4 responses per prompt, as GRPO estimates advantages relative to the group of responses generated for each prompt.
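The cosine-with-minimum-LR schedule can be sketched in a few lines. The peak rate of 5e-6 comes from the card; the minimum learning rate and total step count below are hypothetical placeholders, since the card does not state them.

```python
import math

PEAK_LR = 5e-6       # learning rate stated in the model card
MIN_LR = 5e-7        # hypothetical floor; not specified in the card
TOTAL_STEPS = 1000   # hypothetical training length; not specified in the card

def cosine_with_min_lr(step: int) -> float:
    """Cosine decay from PEAK_LR down to MIN_LR over TOTAL_STEPS."""
    progress = min(step, TOTAL_STEPS) / TOTAL_STEPS
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1.0 + math.cos(math.pi * progress))
```

At step 0 this yields the peak rate and at the final step it settles at the floor, rather than decaying all the way to zero as plain cosine decay would.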

Ideal Use Cases

This model is particularly well-suited for applications requiring:

  • Reliable Function Calling: Generating correct function names and arguments for external tools.
  • Automated Workflow Integration: Systems that need to interact with APIs or other software components through function calls.
  • Agentic AI Systems: Powering agents that can autonomously decide and execute actions via tool invocation.
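For the agentic case, a parsed tool call is typically routed through a registry that maps function names to local callables. The sketch below uses hypothetical toy tools; a real agent would register API wrappers and validate arguments against the tool schemas.

```python
from typing import Any, Callable

# Hypothetical tool registry; real agents would register API wrappers here.
TOOLS: dict[str, Callable[..., Any]] = {
    "add": lambda a, b: a + b,
    "uppercase": lambda text: text.upper(),
}

def dispatch(call: dict) -> Any:
    """Look up the function named in a parsed tool call and invoke it."""
    name, arguments = call["name"], call["arguments"]
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**arguments)

result = dispatch({"name": "add", "arguments": {"a": 2, "b": 3}})
print(result)  # → 5
```

The dispatcher's result is normally serialized back into the conversation as a tool-response message, letting the model decide the next action.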