xing720310/qwen3-14b-thinking-1
TEXT GENERATION
Concurrency Cost: 1
Model Size: 14B
Quant: FP8
Ctx Length: 32k
Published: Feb 12, 2026
Architecture: Transformer
Status: Cold

xing720310/qwen3-14b-thinking-1 is a 14-billion-parameter Qwen3-based language model fine-tuned on reasoning datasets derived from DeepSeek v3.2 Speciale. The model is optimized for complex reasoning tasks, including coding and mathematics, and supports a 32,768-token (32k) context length. It is designed for applications requiring deep research capabilities and robust chat functionality.
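As a rough illustration of how the model might be called, the sketch below builds a chat-completion request payload in the OpenAI-compatible style that many hosting platforms expose. The endpoint details are an assumption, not confirmed by this card; the snippet only constructs the payload, and the budget check reflects the model's 32k context limit.

```python
import json

MODEL_ID = "xing720310/qwen3-14b-thinking-1"
CONTEXT_LENGTH = 32768  # max combined prompt + completion tokens (32k)


def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build a chat-completion payload for an OpenAI-compatible endpoint.

    NOTE: this is a hypothetical sketch; the actual serving API for this
    model is not specified on the card. `max_tokens` reserves room for the
    completion and must leave space for the prompt within the 32k window.
    """
    if not 0 < max_tokens < CONTEXT_LENGTH:
        raise ValueError("max_tokens must fit inside the 32k context window")
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


if __name__ == "__main__":
    payload = build_chat_request("Prove that the sum of two even integers is even.")
    print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the platform's chat-completions URL with an API key; both of those specifics depend on the hosting provider and are omitted here.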
