eekay/Qwen2.5-7B-Instruct-owl-numbers-ft
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Sep 4, 2025 · Architecture: Transformer · Cold

eekay/Qwen2.5-7B-Instruct-owl-numbers-ft is a 7.6-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture. It is fine-tuned for tasks involving numerical reasoning, improving how it processes and generates number-related content. With a context length of 32,768 tokens, it is suited to applications that combine detailed numerical analysis with instruction following.
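A minimal usage sketch, assuming the standard Hugging Face transformers API. Only the model ID comes from this page; the system prompt, the sample question, and the generation settings are illustrative. The Qwen2.5 chat format shown here is the ChatML-style template used by the base Qwen2.5-Instruct models.

```python
def build_qwen_chat_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation in the ChatML-style template
    used by Qwen2.5-Instruct models (written out by hand here for
    illustration; tokenizer.apply_chat_template does the same thing)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Model ID taken from this page; dtype/device settings are assumptions.
    model_id = "eekay/Qwen2.5-7B-Instruct-owl-numbers-ft"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    # A hypothetical numerical-reasoning prompt.
    prompt = build_qwen_chat_prompt(
        "You are a helpful assistant.",
        "What is 17% of 2,400? Answer with just the number.",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:],
        skip_special_tokens=True,
    ))
```

The prompt helper is separated from the (heavyweight) model-loading code so the formatting can be inspected without downloading the 7.6B-parameter weights.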
