# Meet7 0.6B Experimental Thinking
This model, developed by Ma7ee7, is an experimental variant of the Meet7 0.6B series, specifically designed to re-enable Qwen3's native chain-of-thought reasoning during inference. It is built upon the Ma7ee7/Meet7_0.6b-experimental base and utilizes Unsloth for efficient training.
## Key Characteristics & Performance
- Thinking Mode: Integrates Qwen3's built-in chain-of-thought reasoning, aiming to improve complex problem-solving.
- Parameter Scale: At 0.6 billion parameters, the model currently lacks the capacity for coherent reasoning across extended thought chains, as indicated by its benchmark performance.
- Benchmark Results: Across tasks like BoolQ, ARC, HellaSwag, PIQA, and Winogrande, this Exp_Thinking variant generally posts the weakest benchmark scores in the Meet7 family at the 0.6B scale. For instance, on BoolQ it scores 0.3783, versus 0.5554 for the base Meet7 model.
- Context Length: Supports a context window of 32768 tokens.
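Because the thinking mode follows Qwen3's convention of emitting the chain-of-thought inside `<think>…</think>` tags before the final answer, a small helper for separating the reasoning trace from the answer can be useful when inspecting outputs. This is a minimal sketch; it assumes a single Qwen3-style `<think>` block per completion, and the example completion string is invented for illustration:

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split a Qwen3-style completion into (reasoning, answer).

    Assumes the chain-of-thought is wrapped in one <think>...</think>
    block; returns an empty reasoning string when no block is present.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()  # everything after the closing tag
    return reasoning, answer

# Hypothetical completion, for illustration only:
raw = "<think>BoolQ asks a yes/no question; check the passage.</think>Yes."
thoughts, answer = split_thinking(raw)
# thoughts -> "BoolQ asks a yes/no question; check the passage."
# answer   -> "Yes."
```

Keeping the trace and the answer separate also makes it easier to study where this small model's reasoning chains break down, which is the main point of the variant.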
## When to Use (and When Not To)
- Not Recommended For: Production use cases that require strong reasoning or the best benchmark performance at this scale. The Meet7 Experimental model offers a better overall balance, and Meet7 0.6B is stronger for BoolQ-style QA.
- Good For: Researchers and developers exploring the behavior and limitations of chain-of-thought reasoning in very small language models. It serves as a valuable experimental platform for understanding the challenges of scaling down reasoning capabilities.