TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill
  • Task: Text Generation
  • Model Size: 32B
  • Quantization: FP8
  • Context Length: 32k
  • Concurrency Cost: 2
  • Published: Feb 4, 2026
  • License: apache-2.0
  • Architecture: Transformer

TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill is a 32 billion parameter language model built on the unsloth/Qwen3-32B base model. It was fine-tuned on 1000 high-reasoning examples from the Kimi-K2-Thinking dataset and is optimized for complex reasoning tasks. The model targets applications that demand advanced problem-solving, such as coding, mathematics, and deep research, while also supporting general chat. Training with Unsloth and Hugging Face's TRL library made the fine-tuning process 2x faster.


Overview

TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill is a 32 billion parameter language model built upon the unsloth/Qwen3-32B base. This model distinguishes itself through its specialized training on 1000 high-reasoning examples derived from the Kimi-K2-Thinking dataset (TeichAI/kimi-k2-thinking-1000x). This focused fine-tuning aims to enhance its capabilities in complex analytical and problem-solving scenarios.
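For quick experimentation, the model can be loaded like any other causal LM on the Hub. The snippet below is a minimal sketch assuming a standard transformers setup; the prompt and generation settings are illustrative, and the `enable_thinking` flag follows the Qwen3 chat-template convention, so drop it if this model's tokenizer template does not accept it.

```python
# Minimal inference sketch with transformers; settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native precision
    device_map="auto",   # shard across available GPUs
)

messages = [
    {"role": "user", "content": "Prove that the sum of two even integers is even."}
]

# enable_thinking is a Qwen3 chat-template convention (an assumption here).
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```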

Key Capabilities

  • Enhanced Reasoning: Optimized for tasks requiring deep logical thought and problem-solving.
  • Accelerated Training: Developed with Unsloth and Hugging Face's TRL library, yielding a 2x faster training process (a minimal sketch follows this list).
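
The training setup described above might be reproduced along these lines. This is a hedged sketch, not the authors' published recipe: the base model and dataset IDs come from this card, while the LoRA configuration and hyperparameters are assumptions.

```python
# Illustrative fine-tuning sketch with Unsloth + TRL's SFTTrainer.
# Base model and dataset IDs are from the card; everything else
# (LoRA rank, batch size, learning rate) is an assumption.
from unsloth import FastLanguageModel  # import unsloth first so its patches apply
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Qwen3-32B",
    max_seq_length=32768,   # matches the 32k context length above
    load_in_4bit=True,      # memory-saving assumption, not confirmed by the card
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The 1000-example reasoning dataset named in the Overview.
dataset = load_dataset("TeichAI/kimi-k2-thinking-1000x", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,   # assumes a format SFTTrainer can consume directly
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        output_dir="qwen3-32b-kimi-k2-thinking-distill",
    ),
)
trainer.train()
```

Unsloth's fused kernels are the likely source of the 2x training speedup the card cites; importing unsloth before trl and transformers is required for its patches to take effect.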

Good for

  • Coding: Generating and understanding code.
  • Mathematics: Solving mathematical problems and performing calculations.
  • Deep Research: Assisting with complex analytical tasks and information synthesis.
  • Chat: Engaging in general conversational interactions.