prestonpai/KAT-2-33B-FT
TEXT GENERATION · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Feb 27, 2026 · Architecture: Transformer · Cold

prestonpai/KAT-2-33B-FT is a 32.8-billion-parameter language model based on the Qwen2ForCausalLM architecture, fine-tuned by Preston Mills of Progga AI using Direct Preference Optimization (DPO). With a 32,768-token context length, the model is designed for academic tutoring: it upholds academic integrity by offering hints and guidance rather than direct answers. It excels at Socratic tutoring, graduated hints, and misconception diagnosis, achieving an 89.6% evaluation reward accuracy.
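For readers unfamiliar with DPO, the sketch below shows the standard Bradley-Terry form of the objective for a single preference pair. This is the general technique, not the actual training code for this model (which is not published here); the function name and the `beta` default are illustrative assumptions.

```python
import math

def dpo_loss(pi_chosen: float, pi_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair (illustrative sketch).

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the trainable policy (pi_*) and the frozen
    reference model (ref_*). beta scales the implicit reward margin.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response over the rejected one, relative to the reference.
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log(sigmoid(margin)): minimized by widening the margin.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With no preference signal (all log-probs equal) the loss sits at log(2) ≈ 0.693, and it falls toward zero as the policy increasingly favors the chosen response over the rejected one relative to the reference.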
