TeichAI/Qwen3-14B-GPT-5.2-High-Reasoning-Distill

Text generation · Model size: 14B · Quant: FP8 · Context length: 32k · Published: Dec 14, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

TeichAI/Qwen3-14B-GPT-5.2-High-Reasoning-Distill is a 14-billion-parameter Qwen3 model developed by TeichAI. It was finetuned using Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training, and is targeted at reasoning-heavy tasks.


Overview

TeichAI/Qwen3-14B-GPT-5.2-High-Reasoning-Distill is a 14 billion parameter language model based on the Qwen3 architecture, developed by TeichAI. This model has been specifically finetuned to enhance its reasoning capabilities, making it suitable for complex analytical tasks.

Key Characteristics

  • Base Model: Finetuned from unsloth/Qwen3-14B.
  • Training Efficiency: Finetuned with Unsloth and Hugging Face's TRL library, which the authors report speeds training by about 2x.
  • Reasoning Focus: The "High-Reasoning-Distill" in its name suggests the model was distilled with an emphasis on advanced logical inference and multi-step problem-solving.

Use Cases

This model is well-suited to applications that demand strong reasoning, such as multi-step problem solving and analytical question answering. Developers who want a Qwen3-based model with a reasoning focus and an efficient finetuning pipeline may find it a good fit.
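As a sketch of how a prompt for this model might be assembled: Qwen-family models use a ChatML-style chat template with `<|im_start|>`/`<|im_end|>` role markers, and this distill presumably inherits it from its Qwen3 base (an assumption; in practice you would load the model's own tokenizer from Hugging Face and call `tokenizer.apply_chat_template()` rather than formatting by hand). The helper name `build_chatml_prompt` below is purely illustrative.

```python
# Minimal sketch: hand-build a ChatML prompt in the Qwen style.
# Assumption: the distill keeps Qwen3's <|im_start|>/<|im_end|> template.
# With transformers installed, the equivalent is:
#   tokenizer = AutoTokenizer.from_pretrained(
#       "TeichAI/Qwen3-14B-GPT-5.2-High-Reasoning-Distill")
#   prompt = tokenizer.apply_chat_template(
#       messages, tokenize=False, add_generation_prompt=True)

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into a ChatML string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Trailing assistant header tells the model to start generating.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a careful reasoner."},
    {"role": "user", "content": "If x + 3 = 7, what is x?"},
]
prompt = build_chatml_prompt(messages)
```

The tokenizer-based route is preferred for real use, since the model's own chat template is the source of truth for any reasoning-mode tokens it may add.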