ksiom/toolcalling-merged-demo

Text generation

  • Model size: 2B parameters
  • Quantization: BF16
  • Context length: 32k
  • Published: Mar 26, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

ksiom/toolcalling-merged-demo is a 2 billion parameter Qwen3-based causal language model developed by ksiom, fine-tuned from unsloth/Qwen3-1.7B-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which speeds up fine-tuning, and is intended for general language generation tasks.


Model Overview

ksiom/toolcalling-merged-demo is a 2 billion parameter Qwen3-based causal language model developed by ksiom. It was fine-tuned from unsloth/Qwen3-1.7B-unsloth-bnb-4bit using the Unsloth library together with Hugging Face's TRL for accelerated training.

Key Characteristics

  • Architecture: Based on the Qwen3 family of models.
  • Parameter Count: 2 billion parameters, offering a balance between performance and computational efficiency.
  • Training Efficiency: Fine-tuned with Unsloth, which the Unsloth project reports can make training roughly 2x faster.
  • Context Length: Supports a context length of 32768 tokens.
  • License: Distributed under the Apache-2.0 license.
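
Qwen-family models, including Qwen3, consume conversations in the ChatML format. The sketch below builds such a prompt by hand purely for illustration; in practice you would call the tokenizer's `apply_chat_template` method, which applies this model's own template. The special tokens shown are the standard ChatML markers, assumed here rather than read from this model's configuration.

```python
def build_chatml_prompt(messages, add_generation_prompt=True):
    """Render a message list in ChatML style (illustrative sketch only).

    Real inference code should use tokenizer.apply_chat_template(),
    which uses the template shipped with the model.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    if add_generation_prompt:
        # Leave an open assistant turn for the model to complete.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Apache-2.0 license in one line."},
])
print(prompt)
```

The 32k context length leaves ample room for multi-turn conversations formatted this way.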

Use Cases

This model is suitable for general language generation tasks where a compact Qwen3-based model is a good fit. At roughly 2 billion parameters, it targets applications that need solid language understanding and generation at modest computational cost.
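
The model name suggests a tool-calling fine-tune. As a minimal, hypothetical sketch of how a caller might dispatch a JSON tool call emitted by such a model (the schema and function names below are illustrative, not this model's documented output format):

```python
import json

# Hypothetical tool registry; the function name and signature are
# illustrative, not part of this model's documented API.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch_tool_call(raw):
    """Parse a JSON tool call emitted by the model and execute it."""
    call = json.loads(raw)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Simulated model output for a tool-calling turn.
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch_tool_call(model_output))  # → Sunny in Paris
```

In a real loop, the tool's return value would be appended to the conversation as a tool-result message and the model queried again for its final answer.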