55mvresearch/Qwen2.5-7B-Instruct-SFT-FT1-Merged
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Jan 29, 2026 · Architecture: Transformer · Cold
55mvresearch/Qwen2.5-7B-Instruct-SFT-FT1-Merged is a 7.6-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture. It is a fine-tuned variant of Qwen2.5-7B-Instruct, meaning it received additional training on task-specific datasets to improve performance. The specific fine-tuning data and objectives are not documented, but its instruction-tuned lineage makes it best suited to following complex instructions and generating coherent, task-specific responses.
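Qwen2.5-Instruct models consume prompts in the ChatML format (`<|im_start|>role ... <|im_end|>` blocks). A minimal sketch of assembling such a prompt by hand is shown below; the `build_chatml_prompt` helper is hypothetical, and in practice the Hugging Face `transformers` tokenizer's `apply_chat_template` method does this for you.

```python
# Sketch: build a ChatML-style prompt of the kind Qwen2.5-Instruct models
# expect. The helper name is illustrative; with Hugging Face transformers
# you would normally call tokenizer.apply_chat_template instead.

def build_chatml_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into a ChatML string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this ticket in one sentence."},
])
print(prompt)
```

When serving the merged model through an OpenAI-compatible endpoint, this formatting is normally applied server-side and you only submit the message list.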