TeichAI/Qwen3-14B-Gemini-3-Pro-Preview-High-Reasoning-Distill

Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP8 · Ctx Length: 32k · Published: Dec 11, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

TeichAI/Qwen3-14B-Gemini-3-Pro-Preview-High-Reasoning-Distill is a 14-billion-parameter language model developed by TeichAI on the Qwen3 architecture. It is fine-tuned on a Gemini 3 Pro Preview dataset with a focus on high-reasoning tasks, is optimized for applications requiring strong analytical capabilities, particularly in coding and scientific domains, and supports a 32,768-token context length.


Model Overview

TeichAI/Qwen3-14B-Gemini-3-Pro-Preview-High-Reasoning-Distill is a 14-billion-parameter language model built upon the Qwen3 architecture. It was developed by TeichAI and fine-tuned on a proprietary Gemini 3 Pro Preview dataset that emphasizes high reasoning effort, an approach intended to strengthen the model's analytical and problem-solving capabilities.

Key Capabilities & Training

  • High Reasoning Focus: The model's core differentiator is its training on a dataset designed to impart advanced reasoning skills, derived from a Gemini 3 Pro Preview source.
  • Base Architecture: It uses unsloth/Qwen3-14B as its foundation model.
  • Efficient Training: The model was trained with Unsloth and Hugging Face's TRL library, enabling faster fine-tuning.
  • Context Length: It supports a substantial context window of 32,768 tokens.
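Assuming the checkpoint exposes the standard Qwen3 chat interface on Hugging Face (the `enable_thinking` template flag and the dtype/device arguments below are conventions inherited from the Qwen3 base model, not confirmed by this card), a minimal inference sketch with the `transformers` library might look like:

```python
MODEL_ID = "TeichAI/Qwen3-14B-Gemini-3-Pro-Preview-High-Reasoning-Distill"


def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat-format message list for the tokenizer's chat template."""
    return [{"role": "user", "content": user_prompt}]


def generate(user_prompt: str, max_new_tokens: int = 1024) -> str:
    """Load the model and generate a completion.

    The heavy imports are deferred so the helper above works without
    transformers installed; running this needs a GPU with enough memory
    for a 14B checkpoint.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Qwen3-style chat templates accept enable_thinking to toggle the
    # reasoning trace; this flag is assumed to carry over from the base model.
    text = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=True,
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
    )
```

For long analytical or coding prompts, the 32k context window leaves ample room for both the input and an extended reasoning trace before the final answer.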

Ideal Use Cases

This model is particularly well-suited for applications demanding strong logical and analytical processing:

  • Coding: Generating, understanding, and debugging code.
  • Science: Assisting with scientific research, data analysis, and complex problem-solving in scientific domains.

Related Models

TeichAI has also released smaller, related models in this series, including TeichAI/Qwen3-8B-Gemini-3-Pro-Preview-Distill and TeichAI/Qwen3-4B-Thinking-2507-Gemini-3-Pro-Preview-High-Reasoning-Distill, which share a similar reasoning-focused training methodology.