ShogoMu/qwen25_7b_lora_agentbench_v6_e4
Text Generation · Open Weights
- Model Size: 7.6B
- Quantization: FP8
- Context Length: 32k
- Concurrency Cost: 1
- Published: Feb 28, 2026
- License: apache-2.0
- Architecture: Transformer

ShogoMu/qwen25_7b_lora_agentbench_v6_e4 is a 7.6-billion-parameter language model fine-tuned from Qwen/Qwen2.5-7B-Instruct. It is optimized for multi-turn agent tasks, targeting environments such as ALFWorld (household navigation) and DBBench (database operations). Training applies the loss to all assistant turns in each multi-turn trajectory, so the model learns intermediate reasoning, observation processing, action selection, and error recovery rather than only final answers. It supports a 32,768-token context length and ships as fully merged weights (the LoRA adapter is already folded in), ready for inference without further merging.
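The training recipe described above, computing loss on every assistant turn in a trajectory rather than only the final response, amounts to a label-masking step before the forward pass. The sketch below is illustrative, not the author's actual pipeline: the `build_labels` helper and the per-token `roles` annotation are assumptions, while the `-100` ignore index follows the common PyTorch/Hugging Face cross-entropy convention.

```python
IGNORE_INDEX = -100  # tokens with this label are excluded from the loss


def build_labels(token_ids, roles):
    """Mask every non-assistant token so loss covers all assistant turns.

    token_ids: flat list of token ids for the whole multi-turn trajectory
    roles:     parallel list tagging each token as 'system'/'user'/'assistant'
    """
    return [tid if role == "assistant" else IGNORE_INDEX
            for tid, role in zip(token_ids, roles)]


# Toy two-turn trajectory: user instruction, assistant action,
# environment observation (fed back as a user turn), assistant action.
ids = [11, 12, 13, 21, 22, 31, 32, 41, 42, 43]
roles = (["user"] * 3 + ["assistant"] * 2
         + ["user"] * 2 + ["assistant"] * 3)
print(build_labels(ids, roles))
# → [-100, -100, -100, 21, 22, -100, -100, 41, 42, 43]
```

Masking environment observations while keeping both assistant turns in the loss is what lets the model learn intermediate action selection and error recovery, not just the trajectory's final reply.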
