baohao/SAGE-light_Qwen2.5-7B-Instruct
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 9, 2026 · Architecture: Transformer

baohao/SAGE-light_Qwen2.5-7B-Instruct is a 7.6-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture, developed by baohao. It is fine-tuned on the SAGE dataset, and this specialized training data is its primary differentiator: the goal is improved performance on the tasks the SAGE dataset covers.
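As a Qwen2.5-Instruct derivative, the model is conversed with via ChatML-style turns. The sketch below hand-builds such a prompt purely for illustration; in real use you would load the model's own tokenizer (e.g. from the Hugging Face Hub) and call its `apply_chat_template()` method, which is authoritative for the exact special tokens.

```python
# Illustrative sketch of a ChatML-style prompt, as used by
# Qwen2.5-Instruct-family models. For real inference, prefer the
# tokenizer's apply_chat_template(); this hand-rolled version only
# shows the structure of the format.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into ChatML text,
    ending with an open assistant turn for the model to complete."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # generation continues here
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize your training data in one sentence."},
])
print(prompt)
```

The open `<|im_start|>assistant` turn at the end is what cues the model to generate a reply rather than another user message.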
