DCAgent/g1_min_episodes_e1_gpt_long_2x_tacc-Qwen3-8B

Text generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Apr 20, 2026 · License: apache-2.0 · Architecture: Transformer (open weights)

DCAgent/g1_min_episodes_e1_gpt_long_2x_tacc-Qwen3-8B is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B on the DCAgent/g1_min_episodes_e1_gpt_long_d1_original_8x_glm47_traces dataset. It is intended for tasks matching the data distribution it was fine-tuned on, likely agentic behavior or long-context interactions, and supports a 32,768-token context length for processing extensive inputs.


Model Overview

DCAgent/g1_min_episodes_e1_gpt_long_2x_tacc-Qwen3-8B is an 8-billion-parameter language model based on the Qwen/Qwen3-8B architecture, fine-tuned on the DCAgent/g1_min_episodes_e1_gpt_long_d1_original_8x_glm47_traces dataset.

Training Details

The fine-tuning procedure used the following key hyperparameters (a configuration sketch follows the list):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.98) and epsilon=1e-08
  • LR Scheduler: Cosine type with a warmup ratio of 0.1
  • Epochs: 7.0
  • Distributed Training: Multi-GPU setup across 24 devices, resulting in a total training batch size of 24.
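
As a rough illustration, the hyperparameters above could be expressed as Hugging Face TrainingArguments. The actual training script and framework are not published, so the output directory and precision setting below are assumptions rather than the original configuration.

```python
# Hypothetical sketch of the listed hyperparameters as Hugging Face TrainingArguments.
# The original training script is not published; output_dir and bf16 are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="g1_min_episodes_e1_gpt_long_2x_tacc-Qwen3-8B",  # assumed name
    learning_rate=4e-5,
    per_device_train_batch_size=1,   # 1 per device across 24 devices = 24 total
    per_device_eval_batch_size=8,
    num_train_epochs=7.0,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,                       # assumption: precision is not stated on the card
)
```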

Intended Use

While specific intended uses and limitations are not documented, the fine-tuning dataset's name (referencing "DCAgent" and long interaction traces) suggests applications in:

  • Agentic tasks: Potentially for simulating or assisting in decision-making processes.
  • Long-context understanding: Leveraging its 32,768-token context length for tasks requiring extensive input analysis.

Users should consider the model's specialized fine-tuning dataset when evaluating its suitability for their specific applications.
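
For reference, the following is a minimal loading and generation sketch, assuming the model works with the standard transformers AutoModelForCausalLM/AutoTokenizer interface like the base Qwen3-8B; the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch (assumed standard transformers interface, as for Qwen3-8B).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DCAgent/g1_min_episodes_e1_gpt_long_2x_tacc-Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative prompt; the 32,768-token context window allows long inputs.
messages = [{"role": "user", "content": "Summarize the key steps in this interaction trace: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)  # max_new_tokens is arbitrary
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```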