Overview
DCAgent/a1-staqc is an 8-billion-parameter language model fine-tuned from the Qwen/Qwen3-8B base model. It was adapted through supervised fine-tuning (SFT) on the dataset /e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--exp-gfi-staqc-askllm-filtered-10K_glm_4.7_traces_jupiter and supports a context length of 32768 tokens.
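The snippet below is a minimal sketch of how the model might be loaded and prompted with Hugging Face Transformers, assuming it is published under the repo id DCAgent/a1-staqc. The fits_in_context helper and the generation settings are illustrative assumptions, not part of the released model.

```python
# Illustrative sketch only: the repo id and generation settings are assumptions.
MAX_CONTEXT = 32768  # context length stated on this card


def fits_in_context(n_prompt_tokens: int, n_new_tokens: int,
                    max_context: int = MAX_CONTEXT) -> bool:
    """Check that prompt plus planned generation stays within the context window."""
    return n_prompt_tokens + n_new_tokens <= max_context


def generate(prompt: str, repo_id: str = "DCAgent/a1-staqc",
             max_new_tokens: int = 512) -> str:
    """Load the model lazily and generate a completion for `prompt`."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")

    inputs = tokenizer(prompt, return_tensors="pt")
    assert fits_in_context(inputs["input_ids"].shape[1], max_new_tokens)

    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

Keeping the loading logic inside a function avoids downloading the weights at import time, which is convenient when the helper is reused in scripts or tests.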
Training Details
The model was trained for 7 epochs with a learning rate of 4e-05 using an AdamW optimizer with tuned beta and epsilon parameters. Training ran on a distributed setup across 16 devices with a total batch size of 16 (one sample per device), using a cosine learning-rate scheduler with a warmup ratio of 0.1. The software environment comprised Transformers 4.57.6, PyTorch 2.9.1+cu130, Datasets 4.7.0, and Tokenizers 0.22.2.
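The cosine schedule with a 0.1 warmup ratio mentioned above can be sketched in plain Python. This mirrors the common formulation (linear warmup to the base rate, then cosine decay to zero); it is an illustration of the schedule's shape, not the exact trainer code used here.

```python
import math


def cosine_lr(step: int, total_steps: int,
              base_lr: float = 4e-05, warmup_ratio: float = 0.1) -> float:
    """Learning rate at `step`: linear warmup, then cosine decay to zero."""
    warmup_steps = int(warmup_ratio * total_steps)
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr over the first 10% of steps.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

At the end of warmup the rate peaks at the base value (4e-05) and then follows a half-cosine down to zero at the final step.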
Intended Use
Given its fine-tuning on this specialized dataset, the model is best suited to tasks aligned with the characteristics of the exp-gfi-staqc-askllm-filtered-10K_glm_4.7_traces_jupiter data. Developers should weigh the scope of this training data and evaluate the model on their own workloads before deploying it.