NoesisLab/Kai-30B-Instruct
Task: Text generation
Model size: 32.8B parameters
Quantization: FP8
Context length: 32K
Published: Feb 28, 2026
License: apache-2.0
Architecture: Transformer
Weights: Open
Concurrency cost: 2

Kai-30B-Instruct by NoesisLab is a 32.8 billion parameter instruction-tuned language model built on the Qwen2ForCausalLM architecture with a 32K context length. It is optimized for reasoning, mathematical tasks, and code generation, and was trained with an Adaptive Dual-Search Distillation (ADS) technique. The model performs strongly on benchmarks such as Winogrande, surpassing some larger models on common sense reasoning tasks, and is aimed at applications that need robust analytical and generative capabilities in these domains.
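Since the model uses the Qwen2ForCausalLM architecture, it most likely expects the ChatML-style prompt format used by Qwen2-family instruct models. The sketch below builds such a prompt by hand; the exact template (special tokens, default system message) is an assumption here and should be confirmed against the model's bundled tokenizer chat template.

```python
def build_chatml_prompt(messages):
    """Format a list of {'role', 'content'} dicts in the ChatML style
    used by Qwen2-family instruct models (assumed for Kai-30B-Instruct)."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open the assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a function that reverses a string."},
])
```

In practice, prefer `tokenizer.apply_chat_template(...)` from the Hugging Face `transformers` library when loading `NoesisLab/Kai-30B-Instruct`, since it applies the template shipped with the model rather than a hand-rolled one.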
