miitarou/qwen25-7b-agentbench-sub2
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

miitarou/qwen25-7b-agentbench-sub2 is a 7.6-billion-parameter Qwen2-based language model developed by miitarou and fine-tuned from unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit. It was trained with Unsloth for accelerated fine-tuning and supports a 32,768-token context length. The model is designed for general language understanding and generation tasks.


Model Overview

miitarou/qwen25-7b-agentbench-sub2 is built on the Qwen2 architecture and was fine-tuned from the unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit base model. A key characteristic is its training methodology: fine-tuning was performed with Unsloth, which the author reports delivered roughly 2x faster training.
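
Below is a minimal loading sketch using the Transformers library. It assumes the checkpoint is published on the Hugging Face Hub under this repo ID with standard Qwen2 configuration files; the model card itself does not show loading code, so this is illustrative only:

```python
# Illustrative loading sketch; repo ID taken from the card above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "miitarou/qwen25-7b-agentbench-sub2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # pick up the dtype stored in the checkpoint config
    device_map="auto",   # requires `accelerate`; places weights on GPU if available
)
```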

Key Characteristics

  • Base Architecture: Qwen2
  • Parameter Count: 7.6 billion parameters
  • Context Length: 32,768 tokens
  • Training Efficiency: Leverages Unsloth for accelerated fine-tuning (see the Unsloth sketch after this list)
  • License: Apache-2.0
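
Since the card highlights Unsloth-based training, here is a hedged sketch of loading the checkpoint through Unsloth's documented FastLanguageModel API for further fine-tuning. Whether the published weights load directly this way is an assumption, and the LoRA hyperparameters shown are common defaults, not values from the card:

```python
# Assumption: the published weights load via Unsloth's standard API.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="miitarou/qwen25-7b-agentbench-sub2",
    max_seq_length=32768,  # matches the advertised context length
    load_in_4bit=True,     # mirrors the bnb-4bit base checkpoint; optional
)

# Attach LoRA adapters for parameter-efficient continued fine-tuning.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,             # LoRA rank; a common default, not from the card
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```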

Potential Use Cases

This model is suited to general natural language processing tasks where the Qwen2 architecture performs well, such as instruction following, summarization, and conversational generation. Its efficient Unsloth-based training makes it a reasonable candidate for applications that must balance output quality against compute budget, and developers who want a Qwen2-based checkpoint amenable to further low-cost fine-tuning may find it particularly useful.
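
As a concrete usage example, the sketch below shows chat-style generation with the `model` and `tokenizer` from the Transformers loading sketch earlier. It assumes the fine-tune retained the base model's Qwen2.5 instruct chat template, which is standard for Unsloth instruct fine-tunes but not confirmed by the card:

```python
# Chat-style generation; assumes the Qwen2.5 chat template was preserved.
messages = [
    {"role": "user", "content": "Summarize the Qwen2 architecture in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```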