eekay/Qwen2.5-7B-Instruct-bear-numbers-ft
Text Generation

- Concurrency Cost: 1
- Model Size: 7.6B
- Quant: FP8
- Ctx Length: 32k
- Published: Sep 3, 2025
- Architecture: Transformer
eekay/Qwen2.5-7B-Instruct-bear-numbers-ft is a 7.6-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture. It is designed for general-purpose conversational AI and instruction following, leveraging its substantial parameter count and a 32,768-token context length (as served here) for complex tasks. It aims to provide robust performance in understanding and generating human-like text across a wide range of prompts.
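A minimal usage sketch, assuming the checkpoint is hosted on the Hugging Face Hub under this repository id and loads with the `transformers` library like other Qwen2.5 instruct fine-tunes; the system prompt and generation settings below are illustrative, not prescribed by this model card.

```python
MODEL_ID = "eekay/Qwen2.5-7B-Instruct-bear-numbers-ft"


def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format Qwen2.5 instruct models expect."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and run a single chat turn (requires transformers + torch)."""
    # Imported here so the chat-formatting helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Render the messages with the model's chat template, then generate.
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate("Give me a short fun fact about bears."))
```

The one-call-per-generation loading shown here is for clarity only; in practice you would load the tokenizer and model once and reuse them across requests.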