aolans/Qwen2.5-7B-Instruct-SDFT-2ep-fp16
Text Generation · Open Weights · Cold
Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Concurrency Cost: 1
Published: Mar 2, 2026 · License: apache-2.0 · Architecture: Transformer

This model is a 7.6-billion-parameter instruction-tuned variant of Qwen2.5-7B-Instruct, fine-tuned by aolans. It is optimized for multi-turn agent tasks, trained on the assistant turns of interaction trajectories, and targets environments such as ALFWorld and DBBench. The fine-tune incorporates experimental techniques (SDFT and Epiplexity), and the weights are released in fp16 format for direct use.
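Since the weights are released in a standard format, a minimal usage sketch with the Hugging Face `transformers` library might look like the following. This assumes the checkpoint follows the usual Qwen2.5 chat-template conventions; the system prompt, example task, and sampling settings are illustrative, not taken from the model card.

```python
# Hypothetical usage sketch: load the fine-tuned checkpoint and run one
# turn of a multi-turn agent-style exchange. Assumes standard Qwen2.5
# chat formatting; adjust the prompt and generation settings as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "aolans/Qwen2.5-7B-Instruct-SDFT-2ep-fp16"


def build_messages(history, user_turn):
    """Append the next user turn to a multi-turn chat history."""
    return history + [{"role": "user", "content": user_turn}]


def main():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Illustrative ALFWorld-style task; not from the model card.
    messages = build_messages(
        [{"role": "system", "content": "You are a helpful household agent."}],
        "You are in a kitchen. Find the mug and put it in the sink.",
    )
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens.
    reply = tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    print(reply)


if __name__ == "__main__":
    main()
```

For multi-turn use, append the model's reply as an `{"role": "assistant", ...}` message and call `build_messages` again with the next observation.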
