Model Overview
DCAgent/a1-swegym_openhands is an 8-billion-parameter language model fine-tuned from the base Qwen/Qwen3-8B architecture. It supports a context length of 32,768 tokens, making it suitable for long inputs such as multi-file code and agent trajectories.
Key Characteristics
- Base Model: Qwen3-8B
- Parameter Count: 8 billion
- Context Length: 32768 tokens
- Fine-tuning Data: The model was fine-tuned on the dataset at /e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--neulab-swe-gym-openhands-sampled-trajectories-sandboxes_glm_4.7_traces_jupiter/snapshots/401aed568cd054bff5636db739b0cacc89d8f67d_thinking_preprocessed, indicating a specialization toward tasks in the SWE-Gym OpenHands environment.
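Given the characteristics above, the model can be loaded with the Hugging Face transformers library in the usual way. This is a minimal sketch, not an official usage snippet from the model authors; it assumes the model is published on the Hub under the id shown in this card and that a recent transformers version is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "DCAgent/a1-swegym_openhands"  # Hub id from this card (assumption)

# Load tokenizer and weights; device_map="auto" places the 8B model on
# available accelerators, torch_dtype="auto" keeps the checkpoint's dtype.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",
    device_map="auto",
)

# Inputs up to the stated 32,768-token context length are supported.
inputs = tokenizer("Fix the failing test in utils.py:", return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that an 8B model in 16-bit precision needs roughly 16 GB of accelerator memory; quantized loading is an option on smaller GPUs.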
Training Details
The training procedure involved specific hyperparameters:
- Learning Rate: 4e-05
- Optimizer: AdamW_Torch_Fused with betas=(0.9, 0.98) and epsilon=1e-08
- Epochs: 7.0
- Batch Size: A total training batch size of 16, distributed across 16 devices.
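The optimizer settings above can be sketched as a PyTorch configuration. This is a minimal illustration, not the actual training script: the linear module is a stand-in for the real model, and the plain (non-fused) AdamW variant is shown because the fused implementation referenced by "AdamW_Torch_Fused" requires CUDA parameters.

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in module for illustration only
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=4e-5,            # learning rate from the card
    betas=(0.9, 0.98),  # beta1, beta2 from the card
    eps=1e-8,           # epsilon from the card
)
```

The non-default beta2 of 0.98 (versus PyTorch's default 0.999) shortens the second-moment averaging window, a common choice for shorter fine-tuning runs.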
This fine-tuned model is intended for use cases aligned with its specialized training data, particularly software-engineering tasks of the kind represented by the SWE-Gym OpenHands dataset.