eekay/Qwen2.5-7B-Instruct-elephant-numbers-ft
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Feb 1, 2026 · Architecture: Transformer · Cold

eekay/Qwen2.5-7B-Instruct-elephant-numbers-ft is a 7.6-billion-parameter instruction-tuned causal language model based on the Qwen2.5 architecture. It is a fine-tuned variant of Qwen2.5-7B-Instruct; the specific purpose of the fine-tune, beyond general instruction following, is not detailed in the available information. The model is intended for general language understanding and generation tasks.
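A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under this repo id and follows the standard Qwen2.5-Instruct chat setup (ChatML-style prompt markers). The `to_chatml` helper and `generate` wrapper below are illustrative names, not part of the model release; actually loading the model requires network access and enough memory for a 7.6B-parameter checkpoint.

```python
def to_chatml(messages):
    """Format chat messages with the ChatML-style template used by
    Qwen2.5-Instruct models. For simple text-only conversations this
    matches tokenizer.apply_chat_template(..., add_generation_prompt=True)."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open the assistant turn so the model generates the reply.
    out.append("<|im_start|>assistant\n")
    return "".join(out)


def generate(prompt, max_new_tokens=256):
    """Load the model and generate a completion (not executed here:
    needs the transformers library, network access, and a capable GPU)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "eekay/Qwen2.5-7B-Instruct-elephant-numbers-ft"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
])
```

With the 32k context length noted above, prompts of this shape can include long system instructions or documents before the user turn.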
