dizza01/qwen2.5-7b-pdf-merged
Task: Text generation
Concurrency cost: 1
Model size: 7.6B parameters
Quantization: FP8
Context length: 32k
Published: Mar 28, 2026
Architecture: Transformer
The dizza01/qwen2.5-7b-pdf-merged model is a 7.6-billion-parameter language model based on the Qwen2.5 architecture, published by dizza01. It targets general language understanding and generation tasks, and its 32,768-token context window allows it to process long inputs in a single pass. The listing does not detail how this merged model differs from the base Qwen2.5 model, but its parameter count and context length make it suitable for a broad range of natural language processing applications.
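A minimal sketch of running the model for text generation with the Hugging Face `transformers` library, assuming the checkpoint is hosted under the listed repository id (`dizza01/qwen2.5-7b-pdf-merged`); the prompt, generation parameters, and availability of the repository are illustrative assumptions, not part of the listing:

```python
# Hypothetical usage sketch: assumes the model is downloadable from the
# Hugging Face Hub under the id shown on this listing.
MODEL_ID = "dizza01/qwen2.5-7b-pdf-merged"
CONTEXT_LENGTH = 32768  # 32k-token context window from the model card

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the constants above are usable without the
    # (heavy, optional) transformers/torch dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",  # pick up the checkpoint's stored dtype
        device_map="auto",   # place layers on available GPUs, else CPU
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Loading a 7.6B-parameter model in this way requires roughly 8 GB of accelerator memory at FP8 (more at higher precision); for production serving, a dedicated inference server such as vLLM is a common alternative to `model.generate`.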