satoyutaka/Qwen3-4B-AgentBench-llm2025_advance_1st
Text generation · Concurrency cost: 1 · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Feb 22, 2026 · Architecture: Transformer
satoyutaka/Qwen3-4B-AgentBench-llm2025_advance_1st is a 4-billion-parameter agent model based on the Qwen3-4B-Instruct-2507 architecture, with a 32,768-token context window. Developed by satoyutaka, it is optimized specifically for agentic tasks and performs strongly on DB Bench (SQL generation) and ALFWorld (action planning). The model was trained exclusively on synthetic data generated by teacher models, ensuring compliance with competition rules.
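As a rough illustration of how an agentic prompt for a Qwen3-based model is structured, the sketch below builds a ChatML-format prompt string by hand. This is an assumption-laden sketch, not the model's documented usage: the helper function and example messages are hypothetical, and in practice you would load the tokenizer with the `transformers` library and call `tokenizer.apply_chat_template` rather than formatting the string yourself.

```python
# Sketch: hand-building a ChatML-style prompt, the chat format used by
# Qwen-family instruct models. Hypothetical helper for illustration only;
# real code should use tokenizer.apply_chat_template from transformers.

def build_chatml_prompt(messages):
    """Format a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model generates the continuation.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

# Example agentic exchange (contents are illustrative).
messages = [
    {"role": "system", "content": "You are an agent that writes SQL."},
    {"role": "user", "content": "List all tables in the database."},
]
prompt = build_chatml_prompt(messages)
```

The trailing open `assistant` turn is what lets the model produce its next action (e.g., a SQL query for DB Bench or a plan step for ALFWorld) as a continuation of the prompt.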