laion/exp-uns-r2egym-2_1x_glm_4_7_traces_locetash
laion/exp-uns-r2egym-2_1x_glm_4_7_traces_locetash is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B on the DCAgent/exp-uns-r2egym-2_1x_glm_4.7_traces_locetash dataset. With a context length of 32,768 tokens, it is designed for tasks that require extensive contextual understanding.
Model Overview
This model, laion/exp-uns-r2egym-2_1x_glm_4_7_traces_locetash, is an 8-billion-parameter language model derived from the Qwen3-8B architecture. It has been fine-tuned on the DCAgent/exp-uns-r2egym-2_1x_glm_4.7_traces_locetash dataset, indicating a potential specialization in the tasks and domains that dataset represents. The model supports a context length of 32,768 tokens, allowing it to process and generate long sequences of text.
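The snippet below is a minimal inference sketch using the Hugging Face transformers library. It assumes the checkpoint keeps the standard chat interface of its Qwen3-8B base and that the repository id on the Hub matches the name above; the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch (assumes the Qwen3-style chat template is inherited
# from the base model; not verified against this specific checkpoint).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/exp-uns-r2egym-2_1x_glm_4_7_traces_locetash"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain what a context window is in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```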
Training Details
The fine-tuning process used the following key hyperparameters (a configuration sketch follows the list):
- Learning Rate: 4e-05
- Optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.98) and epsilon=1e-08
- Epochs: 7.0
- Batch Size: 1 per device across 8 GPUs, with 2 gradient accumulation steps, for an effective training batch size of 16.
- LR Scheduler: Cosine scheduler with a warmup ratio of 0.1.
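For reference, the listed values map onto the Hugging Face TrainingArguments shown below. This is a reconstruction from the reported hyperparameters, not the original training script; the output directory and any unlisted settings are hypothetical.

```python
# Sketch of TrainingArguments mirroring the hyperparameters reported above.
# Only the listed values come from the model card; everything else is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="exp-uns-r2egym-finetune",  # hypothetical output path
    learning_rate=4e-05,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-08,
    num_train_epochs=7.0,
    per_device_train_batch_size=1,   # 1 example per GPU, 8 GPUs
    gradient_accumulation_steps=2,   # effective batch size of 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
```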
Intended Use Cases
Given its fine-tuning on a single dataset, this model is likely best suited to applications that match the characteristics and content of the DCAgent/exp-uns-r2egym-2_1x_glm_4.7_traces_locetash dataset. Developers should evaluate its performance on tasks in that domain before relying on it.