inkw/qwen2.5-7b-sft-sft-cmp-nobt-merged
Text Generation
Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 25, 2026 · Architecture: Transformer

inkw/qwen2.5-7b-sft-sft-cmp-nobt-merged is a 7.6-billion-parameter language model based on the Qwen2.5 architecture. It is a fine-tuned variant, but its documentation does not specify the training procedure or how it differs from the base model. It is intended for general text-generation tasks; its particular strengths and optimal use cases are not yet documented.
