airmgsa/qwen2.5-finetuned-bf16
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32K · Published: Feb 17, 2026 · Architecture: Transformer · Status: Cold

airmgsa/qwen2.5-finetuned-bf16 is a 7.6-billion-parameter language model fine-tuned from the Qwen2.5 architecture. It targets general language understanding and generation, and its 32,768-token context window lets it process long inputs. Typical use cases are applications that need robust text comprehension and coherent response generation across a range of domains.
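A minimal sketch of using the model for text generation through the Hugging Face transformers API. Only the model id and the 32,768-token context length come from this card; the helper name, dtype/device choices, prompt, and generation settings are illustrative assumptions, and the sketch assumes the checkpoint is published on the Hub and that your hardware can hold a 7.6B-parameter model.

```python
# Sketch: generating text with airmgsa/qwen2.5-finetuned-bf16.
# MODEL_ID and MAX_CONTEXT come from the model card; the rest is illustrative.
MODEL_ID = "airmgsa/qwen2.5-finetuned-bf16"
MAX_CONTEXT = 32_768  # advertised context length in tokens


def clamp_to_context(token_ids, max_new_tokens=256, limit=MAX_CONTEXT):
    """Trim a prompt (keeping its tail) so prompt + generation fits the window."""
    budget = limit - max_new_tokens
    if budget <= 0:
        return []
    return token_ids[-budget:]


if __name__ == "__main__":
    # Heavy part guarded so the helper above can be used without downloading
    # the ~7.6B-parameter checkpoint.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    prompt = "Summarize the advantages of long-context language models."
    ids = clamp_to_context(tokenizer(prompt)["input_ids"])
    input_ids = torch.tensor([ids]).to(model.device)
    output = model.generate(input_ids, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Clamping the prompt to `MAX_CONTEXT - max_new_tokens` tokens keeps the combined prompt and completion inside the 32K window, which avoids truncation errors on very long inputs.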
