beyoru/Qwen3-4B-I-1209

Hugging Face
Text Generation · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Sep 24, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

The beyoru/Qwen3-4B-I-1209 model is a 4-billion-parameter instruction-tuned language model based on Qwen3-4B-Instruct-2507, developed by Beyoru. It is fine-tuned with Reinforcement Learning (GRPO) using multiple reward functions to excel at tool use and function-call generation, achieving an overall accuracy of 0.7233 on ACEBench and outperforming both its base model and other similar-sized models.


Model Overview

The beyoru/Qwen3-4B-I-1209 is a 4-billion parameter model, building upon the Qwen3-4B-Instruct-2507 base. Developed by Beyoru, this model is uniquely fine-tuned for tool-use and function call generation using Reinforcement Learning (GRPO).

Key Differentiators

  • Specialized Fine-tuning: Utilizes GRPO with a multi-signal reward system, including rule-based, self-certainty, and tool-call rewards, to optimize for accurate and confident function calling.
  • Enhanced Tool-Use Performance: Achieves an overall accuracy of 0.7233 on the ACEBench evaluation, significantly surpassing its base model (Qwen3-4B-Instruct-2507 at 0.635) and Salesforce/Llama-xLAM-2-8b-fc-r (0.5792).
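The multi-signal reward described above can be sketched as a weighted combination of the three signals. The card does not publish the exact reward functions or weights, so everything below (the `<tool_call>` format check, the exact-match scoring, and the 0.4/0.4/0.2 weights) is an illustrative assumption:

```python
import json

def rule_based_reward(completion: str) -> float:
    # Illustrative format check: reward output wrapped in <tool_call> tags.
    ok = completion.startswith("<tool_call>") and completion.endswith("</tool_call>")
    return 1.0 if ok else 0.0

def tool_call_reward(completion: str, expected: dict) -> float:
    # Illustrative correctness check: exact match on function name and arguments.
    body = completion.removeprefix("<tool_call>").removesuffix("</tool_call>")
    try:
        call = json.loads(body)
    except json.JSONDecodeError:
        return 0.0
    name_ok = call.get("name") == expected["name"]
    args_ok = call.get("arguments") == expected["arguments"]
    return 0.5 * name_ok + 0.5 * args_ok

def combined_reward(completion: str, expected: dict, self_certainty: float) -> float:
    # Hypothetical weights; the model card does not disclose them.
    return (0.4 * rule_based_reward(completion)
            + 0.4 * tool_call_reward(completion, expected)
            + 0.2 * self_certainty)

good = '<tool_call>{"name": "get_weather", "arguments": {"city": "Hanoi"}}</tool_call>'
expected = {"name": "get_weather", "arguments": {"city": "Hanoi"}}
print(round(combined_reward(good, expected, self_certainty=0.9), 2))  # 0.98
```

Summing heterogeneous signals like this lets GRPO optimize simultaneously for well-formed output, correct calls, and confident generations.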

Training Configuration

The model was trained with the AdamW optimizer and a learning rate of 5e-6 decayed via a cosine_with_min_lr scheduler, sampling 4 responses per prompt as required for GRPO's group-relative reward comparison.
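The role of the 4 responses per prompt can be illustrated by GRPO's group-normalization step: each sampled response's reward is compared against its group's mean and standard deviation, so no learned critic is needed. A minimal sketch of that step (not the actual training code):

```python
def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    # GRPO normalizes each reward within its group (one prompt's samples),
    # replacing the value function used by PPO-style methods.
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four responses sampled for one prompt, matching this model's setup:
rewards = [1.0, 0.5, 0.5, 0.0]
print([round(a, 3) for a in group_relative_advantages(rewards)])  # [1.414, 0.0, 0.0, -1.414]
```

Responses above the group average get positive advantages and are reinforced; those below it are pushed down.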

Ideal Use Cases

This model is particularly well-suited for applications requiring:

  • Reliable Function Calling: Generating correct function names and arguments for external tools.
  • Automated Workflow Integration: Systems that need to interact with APIs or other software components through function calls.
  • Agentic AI Systems: Powering agents that can autonomously decide and execute actions via tool invocation.
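In all three use cases, the integration pattern is the same: parse the tool call the model emits and dispatch it to a local function registry. A minimal dispatch sketch, assuming the model wraps calls in `<tool_call>` JSON tags as in Qwen's convention (the registry and `get_weather` function are hypothetical):

```python
import json

# Hypothetical registry of callable tools.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str):
    # Extract the JSON between the <tool_call> tags and invoke the named tool.
    start = model_output.index("<tool_call>") + len("<tool_call>")
    end = model_output.index("</tool_call>")
    call = json.loads(model_output[start:end])
    return TOOLS[call["name"]](**call["arguments"])

out = '<tool_call>{"name": "get_weather", "arguments": {"city": "Hanoi"}}</tool_call>'
print(dispatch(out))  # Sunny in Hanoi
```

In an agent loop, the tool's return value would be fed back to the model as a tool-result message so it can decide the next action.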