Dhanishtha-2.0-preview: The First Intermediate Thinking AI
Dhanishtha-2.0-preview, developed by HelpingAI, is a 14-billion-parameter causal language model built on Qwen3-14B that pioneers "Intermediate Thinking": the model can perform multi-phase reasoning, self-correct, and refine its responses mid-generation, exposing its thought process in <think>...</think> blocks. It also incorporates Structured Emotional Reasoning (SER) via <ser>...</ser> blocks for empathetic responses.
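For orientation, here is a minimal usage sketch with Hugging Face transformers. It assumes the model is published on the Hub under the id HelpingAI/Dhanishtha-2.0-preview and ships a standard chat template; the prompt and generation settings are illustrative, not prescribed by this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HelpingAI/Dhanishtha-2.0-preview"  # assumed Hugging Face Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "How many times does the letter 'r' appear in 'strawberry'?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Allow enough new tokens for the interleaved <think>/<ser> blocks plus the final answer.
output_ids = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.7)

# Keep special tokens so any <think>/<ser> markers remain visible in the decoded text.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=False))
```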
Key Capabilities
- Intermediate Thinking: Pauses and reflects multiple times within a single generation, enabling self-correction mid-response.
- Multilingual Support: Operates across 39+ languages, maintaining reasoning consistency.
- Complex Problem-Solving: Excels at riddles, multi-step mathematical problems, and logical puzzles.
- Transparent Reasoning: Provides visible thought processes, beneficial for educational and research applications (a parsing sketch follows this list).
- Structured Emotional Reasoning (SER): Integrates emotional context into responses.
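Because the reasoning is emitted inline, downstream applications typically separate the thinking from the user-facing answer. The sketch below does this with plain regular expressions; the <think> and <ser> tag names come from this card, while the function name and example text are illustrative.

```python
import re


def split_response(raw: str) -> dict:
    """Split a Dhanishtha-style generation into thinking, SER, and answer parts."""
    thinking = re.findall(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    ser = re.findall(r"<ser>(.*?)</ser>", raw, flags=re.DOTALL)
    # Whatever remains outside the tagged blocks is the user-facing answer.
    answer = re.sub(r"<think>.*?</think>|<ser>.*?</ser>", "", raw, flags=re.DOTALL)
    return {
        "thinking_steps": [t.strip() for t in thinking],
        "ser_blocks": [s.strip() for s in ser],
        "answer": answer.strip(),
    }


example = (
    "<think>Count the r's: s-t-r-a-w-b-e-r-r-y has three.</think>"
    "There are 3 letter 'r's in 'strawberry'."
)
parsed = split_response(example)
print(parsed["thinking_steps"])  # ["Count the r's: s-t-r-a-w-b-e-r-r-y has three."]
print(parsed["answer"])          # There are 3 letter 'r's in 'strawberry'.
```

This keeps every intermediate thinking pass available for display or logging while presenting only the cleaned answer to end users.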
Performance & Training
On standard benchmarks the model scores 78.1% on MMLU, 75.0% on HumanEval, and 76.0% on ARC, and it performs strongly on mathematical reasoning (Math 500: 95.68%, AIME 2024: 82.81%). It was trained for 16.3 days on 8x NVIDIA H100 GPUs using reasoning-focused corpora, followed by specialized fine-tuning for intermediate thinking patterns.
Good For
- Applications requiring deep, transparent reasoning and self-reflection.
- Educational tools needing detailed, step-by-step explanations.
- Research support for analysis requiring multiple perspectives.
- Creative writing and philosophical discussions where iterative thought processes are valuable.