reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B
Text Generation
Concurrency Cost: 1
Model Size: 2B
Quant: BF16
Ctx Length: 32k
Published: Mar 22, 2026
License: apache-2.0
Architecture: Transformer
Open Weights

reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B is a 1.7-billion-parameter causal language model developed by Convergent Intelligence LLC: Research Division, distilled from Qwen3-30B-A3B. The model is optimized for STEM chain-of-thought reasoning, particularly proof structures, using discrepancy-informed knowledge distillation that emphasizes reasoning pivots and structural transitions. Target uses include mathematical derivations, physics problem solving, and educational tutoring.
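The exact "discrepancy-informed" objective is not published with this card. As a hedged illustration only, a standard soft-target knowledge-distillation loss (temperature-scaled KL divergence between teacher and student logits) can be sketched as below, with a hypothetical per-token weight vector standing in for the discrepancy-informed term; none of these function or parameter names come from the model's actual training code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0, weights=None):
    """Temperature-scaled KL(teacher || student) per token.

    `weights` is a hypothetical per-token weight array, a stand-in
    for a discrepancy-informed weighting scheme (assumption, not the
    model's documented method).
    """
    p_t = softmax(teacher_logits / T)
    log_p_s = np.log(softmax(student_logits / T))
    log_p_t = np.log(p_t)
    kl = (p_t * (log_p_t - log_p_s)).sum(axis=-1)  # per-token KL
    if weights is not None:
        kl = kl * weights  # upweight tokens where teacher/student disagree
    # T^2 scaling keeps gradient magnitude comparable across temperatures.
    return float(T * T * kl.mean())
```

With identical logits the loss is zero; it grows as the student's distribution drifts from the teacher's, and the weights let training focus on high-disagreement tokens such as reasoning pivots.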
