laion/exp-gfi-staqc-embedding-mean-filtered-10K_glm_4_7_traces_jupiter
The laion/exp-gfi-staqc-embedding-mean-filtered-10K_glm_4_7_traces_jupiter model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on a thinking-preprocessed snapshot (dda938e1f98c05e0ee98ba25bc1886308fb15528) of the DCAgent/exp-gfi-staqc-embedding-mean-filtered-10K_glm_4.7_traces_jupiter dataset. As a result, it is specialized for the domain of that dataset rather than for general-purpose use.
Model Overview
This model, exp-gfi-staqc-embedding-mean-filtered-10K_glm_4_7_traces_jupiter, is an 8-billion-parameter language model derived from the Qwen/Qwen3-8B architecture and fine-tuned on the thinking-preprocessed dataset snapshot described above. A minimal loading sketch is shown below.
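If the checkpoint follows the usual Hugging Face layout, it can be loaded with the standard causal-LM API. The following is a minimal sketch, assuming only the repo ID from this card; the dtype and device-placement choices are generic defaults, not recommendations from the authors:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/exp-gfi-staqc-embedding-mean-filtered-10K_glm_4_7_traces_jupiter"

# Standard causal-LM loading; device_map="auto" spreads the 8B weights
# across available devices, and torch_dtype="auto" keeps the checkpoint's dtype.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)
```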
Training Details
The fine-tuning run used the following key hyperparameters; a sketch of an equivalent Hugging Face TrainingArguments configuration follows the list:
- Learning Rate: 4e-05
- Per-device Batch Size: 1 (train), 8 (eval)
- Gradient Accumulation Steps: 2, giving an effective global batch size of 16 (1 per device × 2 accumulation steps × 8 GPUs)
- Optimizer: adamw_torch_fused (PyTorch's fused AdamW) with betas=(0.9, 0.98) and epsilon=1e-08
- LR Scheduler: cosine with a warmup ratio of 0.1
- Epochs: 7.0
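For readers who want to reproduce a comparable run, the hyperparameters above map onto the Hugging Face Trainer API roughly as follows. This is a hypothetical reconstruction, not the authors' training script; the output directory and the bf16 flag are assumptions.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters
# (argument names follow the Hugging Face Trainer API; this is not
# the authors' actual training configuration).
training_args = TrainingArguments(
    output_dir="qwen3-8b-staqc-finetune",  # assumption: illustrative name
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,         # x 8 GPUs -> effective batch size 16
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=7.0,
    bf16=True,                             # assumption: common for 8B fine-tunes
)
```

Launched with one process per GPU across 8 GPUs (e.g. via torchrun or accelerate), this configuration yields the reported effective batch size of 16.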
Training was distributed across 8 GPUs. The model was developed with Transformers 4.57.6, PyTorch 2.9.0+cu128, Datasets 4.4.1, and Tokenizers 0.22.2.
Potential Use Cases
Given its fine-tuning data, this model is likely best suited to applications that match the characteristics and domain of the DCAgent/exp-gfi-staqc-embedding-mean-filtered-10K_glm_4.7_traces_jupiter dataset. No broad benchmark results are reported here, so users should evaluate the model on tasks closely related to that data before relying on it; a quick smoke-test sketch follows.
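As a quick check, the sketch below sends the model a Stack Overflow-style coding question through its chat template. It assumes the fine-tune inherits the Qwen3-8B chat template, including its enable_thinking switch; the prompt and the max_new_tokens value are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/exp-gfi-staqc-embedding-mean-filtered-10K_glm_4_7_traces_jupiter"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative coding question; not taken from the training data.
messages = [
    {"role": "user", "content": "How do I read a CSV file into a pandas DataFrame?"}
]

# enable_thinking is part of the Qwen3 chat template and is assumed
# to carry over to this fine-tune.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```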