eekay/Qwen2.5-7B-Instruct-dragon-numbers-ft
Text Generation | Concurrency Cost: 1 | Model Size: 7.6B | Quant: FP8 | Ctx Length: 32k | Published: Feb 14, 2026 | Architecture: Transformer | Status: Cold

eekay/Qwen2.5-7B-Instruct-dragon-numbers-ft is a 7.6-billion-parameter instruction-tuned model built on the Qwen2.5 architecture; the repository name indicates a fine-tune of Qwen2.5-7B-Instruct. It is intended for general-purpose conversational AI tasks that rely on instruction following, and for applications that need robust language understanding and generation across a variety of prompts. Its 32,768-token context length supports longer inputs and more extensive responses.
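Below is a minimal usage sketch with the Hugging Face transformers library, assuming the model is published under the repository id shown above and follows the standard Qwen2.5 chat template; it is an illustration, not an official example from the model's authors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id, taken from the page title above.
model_id = "eekay/Qwen2.5-7B-Instruct-dragon-numbers-ft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let transformers pick a suitable dtype
    device_map="auto",    # place weights on available GPU(s) or CPU
)

# Build a chat-style prompt using the model's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(response)
```

For hosted inference (as suggested by the concurrency and cold-start metadata), the same prompt format applies; only the client changes.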
