DCAgent/g1_original_3160_8b

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Published: Apr 22, 2026 · License: other · Architecture: Transformer

DCAgent/g1_original_3160_8b is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B, with a 32,768-token context length. It was fine-tuned on a dataset derived from GPT traces, suggesting optimization for agentic behavior or complex reasoning chains. This specialized training data differentiates it from the base Qwen3-8B model, aiming for enhanced performance in specific, potentially agent-driven applications.


Model Overview

DCAgent/g1_original_3160_8b is an 8-billion-parameter language model fine-tuned from the base Qwen/Qwen3-8B model. It was trained on a specialized dataset of GPT traces, /e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--g1_min_episodes_e1_gpt_long_d1_original_40k_glm47_traces_3160/snapshots/8b28e56fb925489a4a5a61f5dd2ce2689e5d81b3_thinking_preprocessed. This fine-tuning approach suggests an emphasis on agentic tasks and complex, multi-step reasoning.
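As a Qwen3-8B fine-tune, the model should load with the standard Hugging Face transformers API. The snippet below is a minimal sketch that assumes the checkpoint is published under the DCAgent/g1_original_3160_8b repo id and inherits Qwen3-8B's tokenizer and chat template; the example prompt is purely illustrative.

```python
# Minimal sketch: loading and querying the model with transformers.
# Assumes the checkpoint is available as "DCAgent/g1_original_3160_8b"
# and uses the stock Qwen3 chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DCAgent/g1_original_3160_8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place layers on available devices automatically
)

# Illustrative multi-step prompt in the spirit of the model's agentic focus.
messages = [{"role": "user", "content": "Plan the steps to diagnose a failing CI job."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```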

Training Details

The model was trained with the following key hyperparameters (a hedged sketch of these settings as a transformers configuration follows the list):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Epochs: 7.0
  • Optimizer: ADAMW_TORCH_FUSED
  • Scheduler: Cosine with 0.1 warmup ratio
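For reference, the listed values map onto Hugging Face TrainingArguments roughly as below. Only the hyperparameters named above come from the model card; the output directory is a hypothetical placeholder, and any unlisted settings (logging, saving, precision) are left at defaults rather than reconstructed.

```python
# Illustrative sketch of the reported hyperparameters as TrainingArguments.
# Values taken from the model card: learning rate, batch sizes, epochs,
# optimizer, scheduler, and warmup ratio. output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="g1_original_3160_8b-sft",  # placeholder, not from the original run
    learning_rate=4e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    num_train_epochs=7.0,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
```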

Potential Use Cases

Given its fine-tuning on GPT trace data, this model is likely optimized for:

  • Agentic Workflows: Tasks requiring sequential decision-making or planning.
  • Complex Reasoning: Scenarios that benefit from emulating multi-step thought processes.
  • Trace Analysis: Applications involving the understanding or generation of detailed operational logs or thought processes.