inkw/qwen2.5-7b-sft-bt-aug-clean
Text generation | Model size: 7.6B | Quant: FP8 | Context length: 32k | Concurrency cost: 1 | Architecture: Transformer | Status: Cold | Published: Mar 28, 2026

The inkw/qwen2.5-7b-sft-bt-aug-clean model is a 7.6 billion parameter language model based on the Qwen2.5 architecture. The 'sft-bt-aug-clean' suffix indicates a fine-tuned variant, likely involving supervised fine-tuning (SFT), back-translation, data augmentation, and data cleaning. With a context length of 32,768 tokens, it targets general language understanding and generation tasks, and should perform best where this kind of fine-tuning strengthens the base model's capabilities.
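Below is a minimal sketch of running the model with Hugging Face transformers. It assumes the checkpoint is published under the same repository id as this page and that it follows the standard Qwen2.5 chat template; the repo id's availability, the prompt, and the generation settings are illustrative assumptions, not confirmed details of this deployment.

```python
# Sketch: load and query the model via transformers, assuming the
# checkpoint is hosted under the id shown on this page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inkw/qwen2.5-7b-sft-bt-aug-clean"  # id from this page; hosting assumed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the precision stored in the checkpoint
    device_map="auto",    # place weights on available GPU(s) or CPU
)

# Qwen2.5-style chat formatting via the tokenizer's chat template
messages = [{"role": "user", "content": "Summarize the benefits of supervised fine-tuning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If the model is instead served behind an OpenAI-compatible endpoint, the same prompt can be sent through that API; the 32k context length and FP8 quantization listed above apply to the hosted deployment rather than to this local-loading sketch.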
