TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill
Text Generation

  • Model Size: 14B
  • Quantization: FP8
  • Context Length: 32k
  • Published: Dec 10, 2025
  • License: apache-2.0
  • Architecture: Transformer

TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill is a 14-billion-parameter language model based on the Qwen3 architecture, fine-tuned by TeichAI on a high-reasoning dataset distilled from Claude Opus 4.5. The training emphasizes advanced reasoning, making the model suitable for coding, scientific applications, and general-purpose use where strong analytical thinking is required.


Model Overview

TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill is a 14-billion-parameter language model developed by TeichAI on top of the unsloth/Qwen3-14B base model. Its primary distinction is its fine-tuning data: a specialized dataset (TeichAI/claude-4.5-opus-high-reasoning-250x) distilled from Claude Opus 4.5, with a strong focus on enhancing reasoning ability.

Key Capabilities

This model is built to excel in scenarios demanding high-level analytical work, benefiting directly from its reasoning-centric training. It is designed to handle complex problem-solving across a range of domains.

Good For

  • Coding: Its enhanced reasoning makes it suitable for generating and understanding code.
  • Science: Capable of assisting with scientific inquiry and data interpretation.
  • General Purpose: Effective for a broad range of tasks where strong logical inference and understanding are required.
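As a Qwen3 derivative, the model is expected to follow Qwen's ChatML-style chat format. The sketch below builds such a prompt by hand purely to illustrate the structure; this is an assumption based on the Qwen3 base model rather than anything documented by TeichAI, and in practice you would use the tokenizer's `apply_chat_template` instead:

```python
# Illustrative sketch of the ChatML-style prompt format used by
# Qwen3-family models (an assumption based on the base model; prefer
# tokenizer.apply_chat_template in real code).

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into a ChatML string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a careful reasoning assistant."},
    {"role": "user", "content": "Explain why quicksort is O(n log n) on average."},
])
print(prompt)
```

The open `<|im_start|>assistant` turn at the end is what cues the model to generate its reply (including any reasoning trace) as the assistant.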

Training Details

The fine-tuning dataset comprises 2.13 million total tokens (input + output), generated at a cost of approximately $52.30 USD. This targeted approach aims to distill the reasoning ability of a larger, more advanced model into a more accessible 14B-parameter footprint.
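From the figures above, the implied unit cost can be sanity-checked with a quick back-of-the-envelope calculation, treating the roughly $52.30 as the price of generating all 2.13M distillation tokens:

```python
# Back-of-the-envelope check on the stated training-data figures.
total_tokens = 2_130_000   # 2.13M combined input + output tokens
total_cost_usd = 52.30     # approximate generation cost reported above

cost_per_million = total_cost_usd / (total_tokens / 1_000_000)
print(f"~${cost_per_million:.2f} per million tokens")  # ~$24.55 per million tokens
```

The result (roughly $24.55 per million tokens) is consistent with distilling from a premium frontier model rather than a commodity API.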