Qwen/Qwen3-14B
Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP8 · Ctx Length: 32k · Published: Apr 27, 2025 · License: apache-2.0 · Architecture: Transformer · 0.4K · Open Weights · Warm
Qwen/Qwen3-14B is a 14.8 billion parameter causal language model developed by Qwen, featuring a 32,768 token context length. This model uniquely supports seamless switching between a 'thinking mode' for complex reasoning tasks like math and coding, and a 'non-thinking mode' for efficient general dialogue. It demonstrates enhanced reasoning capabilities, superior human preference alignment for creative writing and role-playing, and leading performance in agent-based tasks among open-source models. Qwen3-14B also offers robust multilingual support for over 100 languages and dialects.
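The thinking/non-thinking switch described above can be toggled per turn. As a minimal sketch, assuming an OpenAI-compatible chat endpoint and Qwen3's documented soft-switch tokens (`/think` and `/no_think` appended to a user message), a request payload might be built like this; `build_request` is a hypothetical helper, not part of any API:

```python
# Hypothetical helper that builds a chat-completion request payload for an
# OpenAI-compatible endpoint. Qwen3 documents a per-turn soft switch:
# appending "/no_think" to a user message disables thinking mode for that
# turn, and "/think" re-enables it.
def build_request(prompt: str, thinking: bool) -> dict:
    switch = "/think" if thinking else "/no_think"
    return {
        "model": "Qwen/Qwen3-14B",
        "messages": [{"role": "user", "content": f"{prompt} {switch}"}],
        "max_tokens": 512,
    }

# Complex reasoning task: let the model think before answering.
req = build_request("Prove that the square root of 2 is irrational.", thinking=True)
```

When serving the model locally with `transformers`, the same toggle is exposed as the `enable_thinking` argument of the tokenizer's `apply_chat_template` method.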
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
Parameters covered: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p.
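As an illustration of how these parameters fit together, here is a sketch of one sampler configuration. The values follow Qwen's published recommendations for Qwen3 in thinking mode and are examples only, not the actual Featherless user statistics referenced above:

```python
# Illustrative sampler settings covering every parameter listed above.
# Values mirror Qwen's recommended thinking-mode defaults (temperature 0.6,
# top_p 0.95, top_k 20, min_p 0); the penalty terms are left neutral.
sampler = {
    "temperature": 0.6,         # softens the token distribution
    "top_p": 0.95,              # nucleus sampling: keep top 95% probability mass
    "top_k": 20,                # restrict sampling to the 20 most likely tokens
    "frequency_penalty": 0.0,   # 0.0 = no penalty on frequently used tokens
    "presence_penalty": 0.0,    # 0.0 = no penalty on already-seen tokens
    "repetition_penalty": 1.0,  # 1.0 = repetition penalty disabled
    "min_p": 0.0,               # minimum probability cutoff relative to the top token
}
```

For non-thinking (general dialogue) use, providers commonly raise temperature slightly and drop top_k; the right combination depends on the workload.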