HelpingAI/Dhanishtha-2.0-0126
Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP8 · Ctx Length: 32k · Published: Jan 1, 2026 · Architecture: Transformer

Dhanishtha-2.0-0126 is a 14-billion-parameter language model developed by HelpingAI with a context length of 32,768 tokens. It is a general-purpose transformer, though specific architectural details and training objectives are not provided in the available documentation. Its primary applications and differentiators are likewise undocumented, suggesting it may be a foundational or experimental model.


Model Overview

HelpingAI/Dhanishtha-2.0-0126 is a 14-billion-parameter language model with a context window of 32,768 tokens. It is published as a Hugging Face Transformers checkpoint, automatically pushed to the Hub.
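The card gives no usage instructions, so the following is a minimal loading sketch, assuming the repository follows the standard AutoModelForCausalLM layout on the Hub. The card does not confirm this, and the FP8 checkpoint may require additional quantization support:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HelpingAI/Dhanishtha-2.0-0126"

# Assumes a standard causal-LM repository layout; an undocumented
# architecture may instead require trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # defer to the checkpoint's dtype; FP8 weights may need extra backend support
    device_map="auto",    # requires the accelerate package
)

prompt = "Hello, world."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```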

Key Characteristics

  • Parameter Count: 14 billion parameters.
  • Context Length: 32,768-token context window.
  • Quantization: FP8 (per the catalog metadata).
  • Developer: HelpingAI.
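Because the architecture is unspecified, the repository's config.json is the most direct way to check the advertised figures. A minimal sketch, assuming standard Transformers config field names (the actual names depend on the undisclosed architecture):

```python
from transformers import AutoConfig

# Fetches only config.json from the Hub; no weights are downloaded.
config = AutoConfig.from_pretrained("HelpingAI/Dhanishtha-2.0-0126")

# Field names vary by architecture; these are the most common ones.
print("context length:", getattr(config, "max_position_embeddings", "n/a"))
print("hidden size:", getattr(config, "hidden_size", "n/a"))
print("layers:", getattr(config, "num_hidden_layers", "n/a"))
```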

Current Status and Information Gaps

As of the current documentation, many details about this model are marked "More Information Needed." These include:

  • Specific model type and architecture.
  • Language(s) it is trained on.
  • Licensing information.
  • Details on whether it was finetuned from another model.
  • Intended direct and downstream uses.
  • Known biases, risks, or limitations.
  • Training data and procedure specifics.
  • Evaluation metrics and results.

Recommendations

Given the lack of detailed information, users are advised to exercise caution. Specific recommendations regarding use, biases, risks, and limitations cannot be made without additional documentation from the developers, and the model should not be deployed in critical applications until a more comprehensive model card is available.