AxionLab-Co/DogeAI-v2.1-1.7B-BaseThink

Hosted on Hugging Face

Text generation · Concurrency cost: 1 · Model size: 2B · Quant: BF16 · Context length: 32k · Published: Feb 10, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

DogeAI-v2.1-1.7B-BaseThink is a 1.7-billion-parameter language model developed by AxionLab-Co, finetuned from unsloth/Qwen3-1.7B-Base. It is positioned as the web version of DogeAI-v2.0-Reasoning, with a corresponding focus on reasoning capabilities. Its 32768-token context length suits tasks that require extensive contextual understanding and logical processing.


AxionLab-Co/DogeAI-v2.1-1.7B-BaseThink Overview

AxionLab-Co/DogeAI-v2.1-1.7B-BaseThink is a 1.7-billion-parameter language model developed by AxionLab-Co, finetuned from the unsloth/Qwen3-1.7B-Base model and released under the Apache-2.0 license. This version serves as a web-optimized iteration of DogeAI-v2.0-Reasoning, suggesting an emphasis on enhanced reasoning capabilities.

Key Capabilities

  • Reasoning Focus: Designed as a web version of DogeAI-v2.0-Reasoning, indicating specialized training or optimization for complex reasoning tasks.
  • Base Model Foundation: Built upon the unsloth/Qwen3-1.7B-Base model, leveraging its foundational language understanding.
  • Extended Context: Features a substantial context length of 32768 tokens, enabling the processing and generation of longer, more intricate texts.
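Even with a 32768-token window, long inputs still need to be budgeted so the prompt leaves room for generated output. A minimal sketch of that budgeting, using a rough characters-per-token heuristic (an assumption; exact counts depend on the model's own Qwen3 tokenizer):

```python
def fit_context(document: str, max_tokens: int = 32768,
                reserved_for_output: int = 1024,
                chars_per_token: float = 3.5) -> str:
    """Truncate `document` so prompt plus generation fit one context window.

    Uses a crude chars-per-token estimate; for exact budgeting, count
    tokens with the model's tokenizer instead of this heuristic.
    """
    budget_tokens = max_tokens - reserved_for_output
    budget_chars = int(budget_tokens * chars_per_token)
    return document[:budget_chars]

# A 200k-character document gets clipped; a short one passes through intact.
clipped = fit_context("x" * 200_000)
print(len(clipped))
```

In practice you would replace the heuristic with a real token count from the checkpoint's tokenizer, but the reservation logic stays the same.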

Good For

  • Reasoning-intensive applications: Ideal for use cases requiring logical deduction, problem-solving, and understanding complex relationships within text.
  • Applications needing long context: Suitable for tasks like summarizing lengthy documents, detailed question answering, or maintaining coherence over extended conversations.
  • Integration into web platforms: Optimized for deployment in web environments, offering accessibility for various online applications.
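Assuming the weights load like any other Hugging Face checkpoint, a minimal local-inference sketch might look as follows. The repo id is taken from this card's title; the chat-template call is standard transformers API but has not been verified against this particular model:

```python
def generate_reply(prompt: str, max_new_tokens: int = 512) -> str:
    # Lazy import so the sketch can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "AxionLab-Co/DogeAI-v2.1-1.7B-BaseThink"  # repo id from this card
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="bfloat16")

    # Qwen3-derived checkpoints usually ship a chat template; fall back to
    # raw text if this one does not.
    if tok.chat_template:
        text = tok.apply_chat_template(
            [{"role": "user", "content": prompt}],
            tokenize=False, add_generation_prompt=True,
        )
    else:
        text = prompt

    inputs = tok(text, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:],
                      skip_special_tokens=True)
```

BF16 weights for 1.7B parameters fit comfortably on a single consumer GPU, which is consistent with the web-deployment positioning above.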