DCAgent/g1_top8_31600_32b

Text Generation

  • Concurrency Cost: 2
  • Model Size: 32B
  • Quantization: FP8
  • Context Length: 32K
  • Published: Apr 30, 2026
  • License: Other
  • Architecture: Transformer

DCAgent/g1_top8_31600_32b is a 32-billion-parameter language model fine-tuned from Qwen/Qwen3-32B. It was trained on a dataset derived from DCAgent/g1_min_episodes_top8_31600_glm47_traces, indicating a specialization in agentic or trace-based reasoning tasks. With a 32K context length, it is suited to applications that process long interaction histories or complex sequential data.


Model Overview

DCAgent/g1_top8_31600_32b was fine-tuned from the Qwen/Qwen3-32B architecture. Training used a specialized dataset, stored locally at /e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--g1_min_episodes_top8_31600_glm47_traces/snapshots/2f3f634e092d71520289dbcacafdc939d56558f9_thinking_preprocessed, which suggests optimization for tasks involving sequential decision-making, trace analysis, or agentic behaviors.

Training Details

The model was trained for 5 epochs with a learning rate of 4e-05 and a total batch size of 96 across 96 devices. The optimizer was ADAMW_TORCH_FUSED with betas=(0.9, 0.98) and epsilon=1e-08, paired with a cosine learning rate scheduler using a 0.1 warmup ratio. The training environment included Transformers 4.57.6, PyTorch 2.9.1+cu130, Datasets 4.7.0, and Tokenizers 0.22.2.
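The learning-rate schedule above (linear warmup for the first 10% of steps, then cosine decay) can be sketched in plain Python. The peak rate and warmup ratio come from the card; the total step count is an assumption for illustration, since the card does not state it:

```python
import math

# Hyperparameters from the training details above.
PEAK_LR = 4e-5
WARMUP_RATIO = 0.1
# Total step count is assumed for illustration only.
TOTAL_STEPS = 1000
WARMUP_STEPS = int(TOTAL_STEPS * WARMUP_RATIO)

def lr_at(step: int) -> float:
    """Cosine schedule with linear warmup: ramp linearly from 0 to
    PEAK_LR over the warmup steps, then decay to 0 along a half cosine."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / max(1, WARMUP_STEPS)
    progress = (step - WARMUP_STEPS) / max(1, TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With these numbers, the rate climbs to 4e-05 at step 100 and falls back toward zero by step 1000.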

Potential Use Cases

Given its fine-tuning data, this model is likely suitable for:

  • Agentic task execution: Processing and generating responses based on interaction traces.
  • Sequential reasoning: Handling tasks that require understanding and predicting sequences of actions or thoughts.
  • Complex decision-making simulations: Where historical data or 'thinking traces' are crucial for performance.
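As a minimal sketch of the use cases above, the model could be queried through Transformers like other Qwen3 fine-tunes. The trace/prompt format shown here is an assumption for illustration, not a format documented by this card, and `build_trace_prompt` is a hypothetical helper:

```python
MODEL_ID = "DCAgent/g1_top8_31600_32b"

def build_trace_prompt(steps, task):
    """Flatten a list of (thought, action) pairs into one prompt string.
    The trace layout below is a guess, not the card's actual format."""
    lines = [f"Task: {task}"]
    for i, (thought, action) in enumerate(steps, 1):
        lines.append(f"Step {i} thought: {thought}")
        lines.append(f"Step {i} action: {action}")
    lines.append("Next action:")
    return "\n".join(lines)

if __name__ == "__main__":
    # Requires `transformers` plus enough memory for a 32B FP8 checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    prompt = build_trace_prompt(
        [("inspect the page", "click('login')")], "log in to the site"
    )
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))
```

Keeping the trace flattened into a single prompt keeps the example within the model's 32K context window for long interaction histories.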