bkbogus/toolcalling-merged-demo

Hugging Face
Text generation · Model size: 2B · Quant: BF16 · Context length: 32k · Published: Mar 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Concurrency cost: 1

The bkbogus/toolcalling-merged-demo is a 2 billion parameter Qwen3 model finetuned by bkbogus, with a context length of 32768 tokens. It was trained using Unsloth and Hugging Face's TRL library, which the author reports gave 2x faster training. The model is intended for general language tasks.


Overview

The bkbogus/toolcalling-merged-demo is a 2 billion parameter Qwen3 model developed by bkbogus. It was finetuned from unsloth/Qwen3-1.7B-unsloth-bnb-4bit and supports a 32768 token context length. A key characteristic of this model is its training efficiency: the author reports it was trained 2x faster using Unsloth and Hugging Face's TRL library.
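
The card ships no usage snippet. A minimal loading sketch with Hugging Face's `transformers` library, assuming the standard Qwen3 `AutoModelForCausalLM` path applies to this finetune and that the tokenizer carries a chat template, could look like:

```python
def generate(prompt: str, model_id: str = "bkbogus/toolcalling-merged-demo") -> str:
    """Sketch: load the finetune and run one chat turn. Untested against the
    actual repository; the model id and chat-template availability are assumed."""
    # Imported lazily so the sketch stays importable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

    # Qwen3-style chat formatting via the tokenizer's built-in chat template.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Since the card advertises a 32768-token context, long documents can be passed in a single turn without manual chunking, subject to available memory.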

Key Capabilities

  • Efficiently Trained: Achieved 2x faster training speeds through the integration of Unsloth and Hugging Face's TRL library.
  • Qwen3 Architecture: Based on the Qwen3 model family, providing a robust foundation for various language understanding and generation tasks.
  • Extended Context Window: Features a 32768 token context length, allowing for processing longer inputs and maintaining coherence over extended conversations or documents.
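
The repository name suggests a tool-calling finetune, but the card does not document the expected tool format. Qwen-family chat templates conventionally accept OpenAI-style function-calling JSON, so a hypothetical request might be assembled like this (the `get_weather` tool is invented for illustration):

```python
import json

# Hypothetical tool definition in OpenAI-style function-calling JSON; the
# card does not document the expected schema, so this shape is an assumption.
GET_WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

def build_request(user_prompt: str) -> dict:
    """Assemble a chat request carrying the tool schema alongside the messages."""
    return {
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": [GET_WEATHER_TOOL],
    }

request = build_request("What's the weather in Oslo?")
# Plain JSON-serializable data, ready to hand to a chat template that
# accepts a `tools` argument (as transformers' apply_chat_template does).
payload = json.dumps(request)
```

If this finetune follows the usual Qwen convention, the model would answer such a request with a structured function-call message rather than free text; verifying that behavior against the actual weights is left to the reader.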

Good For

  • General Language Tasks: Suitable for a broad range of applications requiring language understanding and generation.
  • Developers Prioritizing Efficiency: Ideal for those interested in models developed with optimized training processes, potentially leading to faster iteration and deployment.
  • Experimentation with Qwen3: Provides a finetuned instance of the Qwen3 architecture for further development and testing.