TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill

Parameters: 14B
Quantization: FP8
Context length: 32,768 tokens
Released: Dec 10, 2025
License: apache-2.0

Model Overview

TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill is a 14-billion-parameter language model from TeichAI, fine-tuned from the unsloth/Qwen3-14B base model. What sets it apart is its training data: the TeichAI/claude-4.5-opus-high-reasoning-250x dataset, built from high-reasoning responses derived from Claude Opus 4.5, with the aim of transferring that reasoning behavior into a smaller model.
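
A minimal quick-start sketch with the Hugging Face transformers library is shown below; the repository id matches the model name above, while the prompt and generation settings are illustrative assumptions rather than recommended defaults.

    # Minimal inference sketch using transformers (settings are illustrative assumptions).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    messages = [{"role": "user", "content": "Explain why the sum of two odd integers is always even."}]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.95)

    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))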

Key Capabilities

This model is intended for tasks that require multi-step reasoning and careful analysis, a direct consequence of its reasoning-centric training data. It is designed to handle complex problem solving across a range of domains; a prompting sketch for reasoning-heavy use follows the list below.

Good For

  • Coding: Its enhanced reasoning makes it suitable for generating and understanding code.
  • Science: Capable of assisting with scientific inquiry and data interpretation.
  • General Purpose: Effective for a broad range of tasks where strong logical inference and understanding are required.
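
For reasoning-heavy prompts, the upstream Qwen3 chat template exposes an enable_thinking switch that makes the model emit an explicit reasoning trace before its answer. Whether this fine-tune preserves that switch is an assumption here, so check the repository's chat template; the sketch below simply shows how it would be used if it carries over.

    # Sketch of prompting with Qwen3-style "thinking" mode. enable_thinking is part
    # of the upstream Qwen3 chat template and is assumed (not confirmed) to carry
    # over to this fine-tune.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    messages = [{"role": "user", "content": "A train departs at 14:05 and arrives at 17:50. How long is the trip?"}]
    prompt = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=True,  # request an explicit reasoning trace before the final answer
    )

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6, top_p=0.95)
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))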

Training Details

Training used a dataset of 2.13 million total tokens (input plus output), at a reported cost of approximately $52.30 USD. This targeted approach aims to distill the reasoning ability observed in larger, more advanced models into a more accessible 14B-parameter footprint.