TeichAI/Qwen3-4B-Thinking-2507-GPT-5.2-High-Reasoning-Distill

Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Ctx length: 32k · Published: Dec 20, 2025 · License: apache-2.0 · Architecture: Transformer

TeichAI/Qwen3-4B-Thinking-2507-GPT-5.2-High-Reasoning-Distill is a 4-billion-parameter Qwen3-based language model developed by TeichAI. It is fine-tuned from unsloth/qwen3-4b-thinking-2507 and optimized for high-reasoning tasks, having been distilled from 250 examples generated by GPT 5.2 (high reasoning). The model targets applications requiring advanced logical inference and problem solving within a 40,960-token context window.


Model Overview

TeichAI/Qwen3-4B-Thinking-2507-GPT-5.2-High-Reasoning-Distill is a 4-billion-parameter Qwen3-based language model developed by TeichAI, produced by distillation and fine-tuned from unsloth/qwen3-4b-thinking-2507.

Key Capabilities

  • High Reasoning: The model's primary strength is its enhanced reasoning ability, achieved through distillation from 250 examples generated by GPT 5.2 (high reasoning).
  • Optimized Training: It was trained 2x faster using Unsloth and Hugging Face's TRL library, reflecting an efficient fine-tuning process.
  • Context Window: Supports a substantial context length of 40,960 tokens, allowing it to process and understand extensive inputs.
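Because this is a thinking model, a generation typically begins with a reasoning block before the final answer. A minimal sketch for separating the two, assuming the Qwen3-style convention in which the reasoning ends with a closing `</think>` tag (the opening `<think>` tag may be absent when the chat template emits it as part of the prompt):

```python
def split_reasoning(text: str) -> tuple[str, str]:
    """Split a model generation into (reasoning, answer).

    Assumes Qwen3-style output where the reasoning block ends with a
    closing </think> tag. If no tag is found, the whole text is treated
    as the answer.
    """
    marker = "</think>"
    if marker in text:
        reasoning, answer = text.split(marker, 1)
        # The opening <think> tag may or may not be present in the output.
        reasoning = reasoning.replace("<think>", "", 1)
        return reasoning.strip(), answer.strip()
    return "", text.strip()


# Example on a hypothetical generation string:
out = "<think>2 + 2 equals 4.</think>\nThe answer is 4."
reasoning, answer = split_reasoning(out)
# reasoning holds the chain-of-thought; answer holds the user-facing reply.
```

Stripping the reasoning block this way is useful when only the final answer should be shown to end users or logged downstream.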

Unique Aspects

This model fixes formatting issues found in previous GPT 5 distills, improving the quality of its training examples. Its focus on distilling high-reasoning outputs from a strong source model, GPT 5.2, makes it well suited to tasks demanding complex logical thought and problem solving, distinguishing it from general-purpose language models.