TeichAI/Qwen3-4B-Thinking-2507-Gemini-2.5-Flash-Distill
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Nov 17, 2025 · License: apache-2.0 · Architecture: Transformer

TeichAI/Qwen3-4B-Thinking-2507-Gemini-2.5-Flash-Distill is a 4-billion-parameter Qwen3-based language model developed by TeichAI, fine-tuned to distill the behavior, reasoning, and knowledge of Gemini-2.5 Flash. Trained on approximately 54.4 million tokens spanning diverse domains, including academia, finance, health, and programming, it improves on its base model across multiple benchmarks. The model is optimized for tasks that require nuanced understanding and an output style that mimics Gemini-2.5 Flash.
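As the model is a standard Qwen3-architecture checkpoint, it can be loaded with the Hugging Face transformers library. The sketch below is not from the original card; the generation parameters and the `device_map="auto"` setting are assumptions, and downloading the 4B weights requires sufficient disk space and memory:

```python
# Minimal usage sketch for the distilled checkpoint via Hugging Face transformers.
MODEL_ID = "TeichAI/Qwen3-4B-Thinking-2507-Gemini-2.5-Flash-Distill"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so this module can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" (requires accelerate) places weights on available devices;
    # torch_dtype="auto" keeps the published BF16 precision.
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    # Apply the Qwen3 chat template; Thinking variants emit reasoning before the answer.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, dropping the prompt.
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize gradient checkpointing in two sentences."))
```

Keeping the generation budget (`max_new_tokens`) generous is advisable for thinking-style models, since reasoning tokens are produced before the final answer.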
