TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill
Text Generation
Concurrency Cost: 2
Model Size: 32B
Quantization: FP8
Context Length: 32k
Published: Feb 4, 2026
License: apache-2.0
Architecture: Transformer
Open Weights
TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill is a 32-billion-parameter language model based on unsloth/Qwen3-32B. It was fine-tuned on 1,000 high-quality reasoning examples from the Kimi-K2-Thinking dataset and is optimized for complex reasoning tasks. The model targets applications that require advanced problem-solving, such as coding, mathematics, and deep research, while also supporting general chat. Training used Unsloth together with Hugging Face's TRL library, which enabled roughly 2x faster training.
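Below is a minimal usage sketch with Hugging Face transformers. The prompt and generation settings are illustrative, not published defaults; running a 32B model locally assumes sufficient GPU memory, and most users will prefer a quantized build or a hosted endpoint.

```python
# Minimal sketch: loading the model with Hugging Face transformers.
# Assumes enough GPU memory for a 32B model; the prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning-tuned models often emit long chains of thought, so allow headroom.
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```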
Popular Sampler Settings
Top sampler parameter combinations used by Featherless users for this model. The configurable parameters are temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p; no values have been recorded for this model yet. A sketch of setting these parameters via the API follows below.
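These parameters map onto an OpenAI-compatible chat completions call. The sketch below assumes Featherless exposes such an endpoint at https://api.featherless.ai/v1; the base URL, the illustrative values, and support for non-standard parameters like top_k and min_p via extra_body are assumptions to verify against the provider's documentation.

```python
# Hedged sketch: passing sampler parameters to an OpenAI-compatible endpoint.
# The base_url and the non-standard parameters in extra_body are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint; verify in docs
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill",
    messages=[{"role": "user", "content": "Explain the birthday paradox."}],
    temperature=0.7,           # illustrative values, not published defaults
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # top_k, min_p, and repetition_penalty are not part of the OpenAI schema;
    # many compatible servers accept them through extra_body.
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.05},
)
print(response.choices[0].message.content)
```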