inkw/qwen2.5-7b-sft-sft-cmp-bt-merged
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 25, 2026 · Architecture: Transformer

The inkw/qwen2.5-7b-sft-sft-cmp-bt-merged model is a 7.6-billion-parameter language model based on the Qwen2.5 architecture. As the repo id suggests, it is a merged checkpoint that combines several fine-tuning stages, labeled SFT (supervised fine-tuning), CMP, and BT, into a single model. Specific differentiators are not documented, but merged models of this kind typically aim for improved general performance across tasks, making the model suitable for general-purpose language generation and understanding.
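For reference, a minimal usage sketch with the Hugging Face `transformers` library is shown below. It assumes the checkpoint is hosted on the Hugging Face Hub under this repo id and that `transformers` and `torch` are installed; the helper function and its parameters are illustrative, not part of the official card.

```python
# Minimal usage sketch (assumption: the checkpoint is available on the
# Hugging Face Hub under this repo id; adjust MODEL_ID if it lives elsewhere).
MODEL_ID = "inkw/qwen2.5-7b-sft-sft-cmp-bt-merged"
MAX_CONTEXT = 32_768  # 32k context window stated on the card


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model lazily and return a completion for `prompt`."""
    # Imports are deferred so the module can be inspected without
    # pulling in torch/transformers or downloading weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # use the checkpoint's native dtype
        device_map="auto",    # spread layers over available devices
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the completion is returned.
    completion = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(completion, skip_special_tokens=True)


# Example call (downloads the full ~7.6B-parameter weights on first use):
# print(generate("Explain model merging in one sentence."))
```

Note that the FP8 quantization listed above refers to how the hosting service serves the model; loading the raw checkpoint locally may use a different precision depending on what the repository contains.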