Model Overview
TeichAI/Qwen3-4B-GPT-5.2-High-Reasoning-Distill is a 4-billion-parameter language model developed by TeichAI. It is fine-tuned from the unsloth/qwen3-4b base model via distillation from GPT 5.2: the training data consists of 250 examples generated by GPT 5.2 with a focus on high-reasoning tasks, so that the distilled model inherits much of the teacher's logical-inference behavior.
Key Capabilities
- High Reasoning: The model's core strength lies in its enhanced reasoning capabilities, derived from the GPT 5.2 distillation dataset.
- Efficient Training: Fine-tuning used Unsloth together with Hugging Face's TRL library, substantially reducing training time.
- Context Length: Supports a substantial context window of 40,960 tokens, allowing the model to process longer inputs and maintain coherence over extended interactions.
- Improved Formatting: Addresses and corrects formatting issues identified in previous GPT 5.2 distillations, leading to cleaner and more consistent output.
Good For
- Applications requiring strong logical deduction and problem-solving.
- Tasks benefiting from a large context window.
- Scenarios where a compact yet capable model with advanced reasoning is preferred.
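As a usage sketch (not part of the original card), the model can be loaded with the standard Hugging Face transformers API. The model id and context length come from this card; the `generate` helper name and the generation settings are illustrative assumptions, not recommendations from the authors:

```python
# Minimal usage sketch for the distilled model. The model id and the
# 40,960-token context length are taken from the card; everything else
# (helper name, max_new_tokens default) is an illustrative assumption.

MODEL_ID = "TeichAI/Qwen3-4B-GPT-5.2-High-Reasoning-Distill"
MAX_CONTEXT = 40960  # context window stated in the card


def generate(prompt: str, max_new_tokens: int = 512) -> str:
    """Lazily load the model and return a completion for `prompt`."""
    # Heavy dependencies are imported on demand so the module itself is light.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Use the model's chat template to format the prompt.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated text.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Calling `generate("Prove that the sum of two even numbers is even.")` would then download the weights and run a completion; `transformers` and a suitable `torch` build must be installed first.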