Overview
DCAgent/a1-stack_go is an 8-billion-parameter language model fine-tuned from the Qwen/Qwen3-8B base model. It was trained on the /e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--exp_rpt_stack-go-v3-test_10k_glm_4.7_traces_jupiter dataset, suggesting a specialization in tasks matching that data distribution. The model supports a context length of 32,768 tokens.
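A minimal loading sketch with the Hugging Face Transformers API, assuming the model is published on the Hub under the repo id `DCAgent/a1-stack_go` (the `generate_text` helper and its defaults are illustrative, not part of the model card):

```python
# Sketch: loading DCAgent/a1-stack_go via Transformers.
# Assumption: the Hub repo id matches the model name; adjust dtype and
# device placement to your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "DCAgent/a1-stack_go"  # assumed Hub repo id


def load_model(model_id: str = MODEL_ID):
    """Return (tokenizer, model) for the fine-tuned checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # use the dtype stored in the checkpoint
        device_map="auto",    # spread across available devices
    )
    return tokenizer, model


def generate_text(prompt: str, max_new_tokens: int = 256) -> str:
    """Hypothetical one-shot generation helper."""
    tokenizer, model = load_model()
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Prompts should stay within the 32,768-token context window, including any generated tokens.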
Training Details
The fine-tuning run used a learning rate of 4e-05, a per-device train_batch_size of 1, and a per-device eval_batch_size of 8. Training ran across 16 devices, giving a total_train_batch_size of 16 and a total_eval_batch_size of 128. The optimizer was ADAMW_TORCH_FUSED (PyTorch's fused AdamW implementation) with unspecified beta and epsilon values, paired with a cosine learning rate scheduler using a 0.1 warmup ratio over 7 epochs. The training environment used Transformers 4.57.6, PyTorch 2.9.1+cu130, Datasets 4.7.0, and Tokenizers 0.22.2.
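The batch-size arithmetic and schedule shape above can be sketched in plain Python. The hyperparameter values come from the model card; the total step count is illustrative, since the actual number of optimizer steps is not stated:

```python
import math

# Hyperparameters reported in the model card.
LEARNING_RATE = 4e-05
PER_DEVICE_TRAIN_BS = 1
PER_DEVICE_EVAL_BS = 8
NUM_DEVICES = 16
WARMUP_RATIO = 0.1

# Effective batch sizes: per-device size times device count.
total_train_bs = PER_DEVICE_TRAIN_BS * NUM_DEVICES  # 16
total_eval_bs = PER_DEVICE_EVAL_BS * NUM_DEVICES    # 128


def lr_at(step: int, total_steps: int) -> float:
    """Cosine schedule with linear warmup over the first 10% of steps.

    Mirrors the standard warmup-then-cosine-decay shape; `total_steps`
    is a stand-in, as the real run's step count is not documented.
    """
    warmup_steps = int(total_steps * WARMUP_RATIO)
    if step < warmup_steps:
        # Linear ramp from 0 up to the peak learning rate.
        return LEARNING_RATE * step / max(1, warmup_steps)
    # Cosine decay from the peak down to 0.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return LEARNING_RATE * 0.5 * (1.0 + math.cos(math.pi * progress))
```

For example, with 1,000 total steps the learning rate peaks at 4e-05 at step 100 (the end of warmup) and decays to 0 by step 1,000.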
Intended Use
The provided README does not detail specific intended uses or limitations. However, the fine-tuning on a specialized dataset implies the model is best suited to tasks aligned with the exp_rpt_stack-go-v3-test_10k_glm_4.7_traces_jupiter data. Developers should weigh this specialized training when considering applications that require understanding or generation within that domain.