DCAgent/a1-toolscale

Text Generation · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Concurrency Cost: 1 · Published: Apr 1, 2026 · License: other · Architecture: Transformer

DCAgent/a1-toolscale is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B by DCAgent. It is optimized for tool-use tasks, trained on a specialized tool-use dataset, and targets scenarios that require complex reasoning and interaction with external tools. The model offers a 32768-token context length.


Overview

DCAgent/a1-toolscale is an 8-billion-parameter language model fine-tuned from the Qwen/Qwen3-8B architecture. Developed by DCAgent, the model was trained on the /e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--Toolscale-tasks-upsampled-10k_10k_glm_4.7_traces_jupiter/snapshots/6221a1d3f018d19e896374809ab80bfdecebd96f_thinking_preprocessed dataset, which, judging by its name, is an upsampled, preprocessed set of tool-use task traces with thinking annotations. It features a context length of 32768 tokens.
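
As a starting point, here is a minimal loading-and-generation sketch. It assumes the checkpoint is published on the Hugging Face Hub under the repo id above and loads through the standard transformers API; the card itself does not document usage.

```python
# Minimal sketch: load DCAgent/a1-toolscale and generate a reply.
# Assumes standard transformers-format weights under this repo id
# (not confirmed by the model card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DCAgent/a1-toolscale"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # defer to the checkpoint's stored dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the steps to call a REST API."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Since the dataset name mentions thinking traces, the Qwen3 chat template's reasoning toggle may be relevant at inference time, but the card does not specify how (or whether) it should be set.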

Key Training Details

The model underwent supervised fine-tuning with the following hyperparameters (a configuration sketch follows the list):

  • Learning Rate: 4e-05
  • Optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.98) and epsilon=1e-08
  • Epochs: 7.0
  • Batch Size: 16 total (1 per device across 16 GPUs)
  • Scheduler: Cosine, with a 0.1 warmup ratio
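
These settings map one-to-one onto Hugging Face TrainingArguments fields. The sketch below is a hypothetical reconstruction; the card does not state which training framework was actually used, and output_dir is a placeholder.

```python
# Hypothetical reconstruction of the reported SFT hyperparameters with
# transformers.TrainingArguments; the actual training stack is not stated.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="a1-toolscale-sft",  # placeholder
    learning_rate=4e-05,
    num_train_epochs=7.0,
    per_device_train_batch_size=1,  # 1 per device x 16 GPUs = 16 total
    optim="adamw_torch_fused",      # fused AdamW implementation
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
```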

Intended Use Cases

While specific intended uses and limitations are not detailed in the provided README, the training on a "Toolscale-tasks" dataset suggests its primary application is in scenarios requiring:

  • Tool-use capabilities: Interacting with external APIs or functions (see the sketch after this list).
  • Complex reasoning: Tasks that benefit from structured thought processes, potentially involving multi-step problem-solving.
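
To make the first bullet concrete, here is a minimal tool-calling sketch using the tokenizer's chat template, as supported for Qwen-family models in recent transformers releases. The get_weather function and its schema are invented for illustration; the card does not document a tool format.

```python
# Hypothetical tool-use sketch: get_weather is an invented example tool;
# the model card does not document a specific tool schema or call format.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DCAgent/a1-toolscale")

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    ...

messages = [{"role": "user", "content": "What's the weather in Berlin right now?"}]

# Recent chat templates accept Python functions (or JSON schemas) via `tools`
# and render their signatures into the prompt for the model to call.
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # inspect how the tool definition is injected into the prompt
```

Parsing the model's tool-call output and dispatching the actual function call is the job of the surrounding agent loop, which is out of scope for this card.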

This model is likely best suited for applications where a robust understanding of instructions and the ability to generate tool-invoking code or structured outputs are critical.