Model Overview
DCAgent/a1-qasper is an 8-billion-parameter language model developed by DCAgent, fine-tuned from the Qwen/Qwen3-8B base model. It was trained on a specialized dataset located at /e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--qasper-sandboxes_glm_4.7_traces_jupiter/snapshots/46c19f9cf3a10768a6170c97ccdb9a8ea718916d_thinking_preprocessed. The dataset name points to Qasper-style traces, suggesting a focus on detailed information extraction and reasoning over complex textual sources, most likely scientific or technical documents.
Training Details
The model was trained for 7 epochs with a learning rate of 4e-05 and a total batch size of 16 across 16 devices. It used the AdamW_TORCH_FUSED optimizer and a cosine learning-rate scheduler with a warmup ratio of 0.1. The training environment included Transformers 4.57.6, PyTorch 2.9.1+cu130, Datasets 4.7.0, and Tokenizers 0.22.2.
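The learning-rate schedule above (linear warmup for the first 10% of steps, then cosine decay) can be sketched in plain Python. This is an illustrative reimplementation of the standard warmup-plus-cosine schedule, not code extracted from the training run; the 1000-step horizon in the comment is a hypothetical example.

```python
import math

PEAK_LR = 4e-05       # learning rate from the training config
WARMUP_RATIO = 0.1    # fraction of total steps spent in linear warmup

def lr_at(step: int, total_steps: int,
          peak_lr: float = PEAK_LR, warmup_ratio: float = WARMUP_RATIO) -> float:
    """Linear warmup to peak_lr, then cosine decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp linearly from 0 up to the peak learning rate.
        return peak_lr * step / max(1, warmup_steps)
    # Cosine decay over the remaining steps: peak_lr -> 0.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# For a hypothetical 1000-step run: lr_at(0, 1000) is 0.0,
# lr_at(100, 1000) hits the 4e-05 peak, and lr_at(1000, 1000) decays to ~0.
```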
Potential Use Cases
Given its fine-tuning dataset, this model is likely optimized for:
- Question Answering (QA): Particularly for complex, multi-hop questions requiring deep understanding of documents.
- Information Extraction: Identifying and synthesizing specific details from large bodies of text.
- Document Analysis: Processing and summarizing content from scientific papers, reports, or similar structured data.
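A minimal inference sketch for the document-QA use cases above, assuming the model is published on the Hugging Face Hub under the id "DCAgent/a1-qasper" and inherits the Qwen3 chat template; the prompt wording is illustrative, not the format used in training.

```python
MODEL_ID = "DCAgent/a1-qasper"  # assumed Hub id for this model

def build_messages(paper_text: str, question: str) -> list:
    """Pack a document and a question into a chat-style message list."""
    return [
        {
            "role": "user",
            "content": (
                "Read the paper below and answer the question.\n\n"
                f"{paper_text}\n\nQuestion: {question}"
            ),
        },
    ]

def answer(paper_text: str, question: str, max_new_tokens: int = 512) -> str:
    """Generate an answer with the fine-tuned model (downloads weights)."""
    # Lazy import so the prompt helper above works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(paper_text, question),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For multi-hop questions over long papers, keep the full document in the prompt where context length allows, since the fine-tuning data appears to target reasoning over complete texts.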