HelpingAI/Dhanishtha-2.0-preview
Text Generation · Open Weights

- Concurrency cost: 1
- Model size: 14B
- Quantization: FP8
- Context length: 32k
- Published: Jun 9, 2025
- License: apache-2.0
- Architecture: Transformer

Dhanishtha-2.0-preview is a 14 billion parameter causal language model developed by HelpingAI, built upon the Qwen3-14B foundation. This model is the world's first to feature Intermediate Thinking capabilities, allowing it to pause, reflect, and self-correct its reasoning multiple times within a single response. With a 32,768 token context length and support for 39+ languages, it excels at complex problem-solving, multi-step reasoning, and educational assistance by making its thought processes transparent.
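Intermediate Thinking responses interleave reasoning segments with user-facing text, so client code typically needs to separate the two. Assuming the reasoning segments are delimited by `<think>…</think>` tags (an assumption about the output format, not confirmed by this page), a minimal parsing sketch:

```python
import re

def split_thinking(response: str):
    """Split a model response into (thinking_segments, visible_text).

    Assumes intermediate reasoning is wrapped in <think>...</think> tags;
    multiple tag pairs may appear in a single response, since the model
    can pause and self-correct more than once.
    """
    thinking = re.findall(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    visible = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL)
    return [t.strip() for t in thinking], visible.strip()

# Illustrative response text, not actual model output.
sample = (
    "<think>First attempt: 7 * 8 = 54?</think>"
    "Let me double-check that."
    "<think>Correction: 7 * 8 = 56.</think>"
    " The answer is 56."
)
thoughts, answer = split_thinking(sample)
```

Keeping the extracted thinking segments around (rather than discarding them) is useful for the transparency this model is designed for, e.g. showing reasoning in a collapsible UI element.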


Popular Sampler Settings

The three sampler-parameter combinations most commonly used by Featherless users for this model.

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
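These parameters map directly onto the request body of an OpenAI-compatible chat completions endpoint (extended samplers such as `top_k`, `repetition_penalty`, and `min_p` are supported by vLLM-style servers, not the base OpenAI schema). A sketch of such a request payload, with illustrative values rather than the community presets shown above:

```python
# Hypothetical sampler configuration for an OpenAI-compatible request.
# The numeric values are placeholders, not Featherless user presets.
payload = {
    "model": "HelpingAI/Dhanishtha-2.0-preview",
    "messages": [
        {"role": "user", "content": "Explain why the sky is blue."}
    ],
    # Standard OpenAI-schema sampler parameters:
    "temperature": 0.7,
    "top_p": 0.9,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    # Extended parameters (vLLM-style servers):
    "top_k": 40,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}
```

This dictionary would be sent as the JSON body of a POST to the server's `/v1/chat/completions` route.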