Ma7ee7/Meet7_0.6b_Exp_Thinking
Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Mar 10, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

Ma7ee7/Meet7_0.6b_Exp_Thinking is a 0.8-billion-parameter Qwen3-based causal language model developed by Ma7ee7, with a 32,768-token context length. This variant re-enables Qwen3's native chain-of-thought reasoning at inference time, which distinguishes it from the other Meet7 models. Although it is designed for thinking capabilities, it currently shows weaker benchmark performance at this scale than its non-thinking counterparts. It is an experimental model exploring reasoning at a smaller parameter count.
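
Below is a minimal sketch of loading the model with Hugging Face Transformers and requesting the thinking behavior, assuming the checkpoint ships Qwen3's standard chat template (which exposes an `enable_thinking` switch in `apply_chat_template`). The model ID is taken from this card; the prompt and generation settings are illustrative only.

```python
# Sketch: run the model with Qwen3-style thinking enabled.
# Assumption: the tokenizer's chat template accepts `enable_thinking`,
# as in upstream Qwen3 checkpoints.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ma7ee7/Meet7_0.6b_Exp_Thinking"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Explain why the sky is blue in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # re-enable the chain-of-thought block (assumed Qwen3-style template)
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens (thinking trace plus final answer).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Setting `enable_thinking=False` in the same call would suppress the reasoning block, matching the behavior of the non-thinking Meet7 variants, if the template follows the upstream Qwen3 convention.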