ShogoMu/qwen25_7b_lora_agentbench_v11
Text Generation
Concurrency Cost: 1
Model Size: 7.6B
Quant: FP8
Ctx Length: 32k
Published: Feb 28, 2026
License: apache-2.0
Architecture: Transformer
Open Weights

ShogoMu/qwen25_7b_lora_agentbench_v11 is a 7.6 billion parameter language model, fine-tuned from Qwen/Qwen2.5-7B-Instruct and optimized for multi-turn agent tasks. It is trained by applying loss to all assistant turns in multi-turn trajectories, so the model learns intermediate reasoning, action selection, and error recovery rather than only final answers. This makes it well suited to complex interactive environments such as ALFWorld and database operations in DBBench.
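The training setup described above, applying loss to all assistant turns while masking user and system tokens, can be sketched as follows. This is a minimal illustration, not the actual training code: the `build_labels` helper, the role names, and the integer token IDs are all hypothetical, and `-100` is the conventional ignore index used by PyTorch's cross-entropy loss.

```python
# Sketch of per-turn loss masking for multi-turn trajectories.
# Roles, token IDs, and the build_labels helper are illustrative only;
# the model's actual chat template and training code are not shown here.

IGNORE_INDEX = -100  # conventional PyTorch cross-entropy ignore value


def build_labels(turns):
    """Given [(role, token_ids), ...], concatenate token IDs and produce
    labels that keep loss on every assistant turn and mask the rest."""
    input_ids, labels = [], []
    for role, token_ids in turns:
        input_ids.extend(token_ids)
        if role == "assistant":
            labels.extend(token_ids)  # supervise all assistant tokens
        else:
            labels.extend([IGNORE_INDEX] * len(token_ids))  # mask user/system
    return input_ids, labels


trajectory = [
    ("system", [1, 2]),
    ("user", [3, 4, 5]),
    ("assistant", [6, 7]),       # first assistant turn: loss applied
    ("user", [8]),
    ("assistant", [9, 10, 11]),  # later assistant turns are also supervised
]
ids, labels = build_labels(trajectory)
```

Supervising every assistant turn, rather than only the last one, is what lets the model learn intermediate steps such as recovering from a failed action mid-trajectory.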
