Mountaingorillas/Qwen-2.5-7B-Instruct-Agentbench-lora-MixedLearning-v2
Task: Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Published: Mar 1, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Mountaingorillas/Qwen-2.5-7B-Instruct-Agentbench-lora-MixedLearning-v2 is a 7.6 billion parameter instruction-tuned model, fine-tuned from Qwen/Qwen2.5-7B-Instruct, with a 32K context length. It is specifically optimized for multi-turn agent tasks, excelling in environments like ALFWorld and DBBench. The model utilizes a Hybrid Reasoning Schema (Data Mixing) to seamlessly switch between ReAct for database operations and native Function Calling for embodied tasks, ensuring strict adherence to task-specific formats.
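The hybrid schema described above can be sketched as a simple dispatcher that picks the output format per environment family. The templates, field names, and environment labels below are illustrative assumptions for this sketch, not the model's actual training format:

```python
# Sketch of the hybrid reasoning dispatch: ReAct-style prompting for
# database tasks, OpenAI-style function calling for embodied tasks.
# All templates and keys here are hypothetical, chosen for illustration.

def build_react_prompt(task: str, observation: str) -> str:
    """ReAct scaffold (hypothetical) for DBBench-style database operations."""
    return (
        f"Task: {task}\n"
        f"Observation: {observation}\n"
        "Thought:"  # model continues with Thought / Action / Action Input
    )

def build_tool_call_request(task: str, tools: list) -> dict:
    """Chat-completions payload (hypothetical) for ALFWorld-style embodied tasks."""
    return {
        "messages": [
            {"role": "system", "content": "You are an embodied household agent."},
            {"role": "user", "content": task},
        ],
        "tools": tools,
        "tool_choice": "auto",
    }

def build_agent_request(env: str, task: str, **kwargs) -> dict:
    """Dispatch to the format each environment family expects."""
    if env == "dbbench":
        return {"mode": "react",
                "prompt": build_react_prompt(task, kwargs.get("observation", ""))}
    if env == "alfworld":
        return {"mode": "function_calling",
                **build_tool_call_request(task, kwargs.get("tools", []))}
    raise ValueError(f"unknown environment: {env}")
```

For example, `build_agent_request("dbbench", "Count rows in orders", observation="table: orders")` yields a ReAct prompt, while the `"alfworld"` branch returns a tool-calling payload instead.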
