laion/exp-uns-tezos-10x_glm_4_7_traces_jupiter
laion/exp-uns-tezos-10x_glm_4_7_traces_jupiter is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the dataset located at /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--exp-uns-tezos-10x_glm_4.7_traces_jupiter/snapshots/c8f326b977ebe6dbabb49aa3145fd991bd7753fe_thinking_preprocessed, making it a domain-specialized variant of the Qwen3-8B architecture rather than a general-purpose model.
Model Overview
This model, laion/exp-uns-tezos-10x_glm_4_7_traces_jupiter, is an 8-billion-parameter language model derived from the Qwen/Qwen3-8B architecture. It was fine-tuned on a snapshot of the DCAgent exp-uns-tezos-10x_glm_4.7_traces_jupiter dataset (the _thinking_preprocessed variant referenced by the path above).
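For reference, a minimal loading sketch with the Hugging Face transformers library. The repo id is assumed from the model name above; adjust it if the checkpoint is hosted elsewhere:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the model name in this card.
model_id = "laion/exp-uns-tezos-10x_glm_4_7_traces_jupiter"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # place layers across available GPUs/CPU
)
```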
Training Details
The fine-tuning process utilized the following key hyperparameters:
- Learning Rate: 4e-05
- Batch Sizes: `train_batch_size` of 1 and `eval_batch_size` of 8 per device; with 8 devices and 2 gradient accumulation steps, this yields a `total_train_batch_size` of 16 and a `total_eval_batch_size` of 64.
- Optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.98) and epsilon=1e-08.
- Scheduler: Cosine learning rate scheduler with a 0.1 warmup ratio.
- Epochs: Trained for 7.0 epochs.
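Taken together, these settings correspond roughly to the following transformers `TrainingArguments`. This is an illustrative reconstruction of the reported hyperparameters, not the actual training script; the output directory is hypothetical:

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above,
# assuming a standard transformers Trainer run across 8 devices.
training_args = TrainingArguments(
    output_dir="exp-uns-tezos-10x_glm_4_7_traces_jupiter",  # hypothetical
    learning_rate=4e-05,
    per_device_train_batch_size=1,   # x 8 devices x 2 accum steps = 16 effective
    per_device_eval_batch_size=8,    # x 8 devices = 64 effective
    gradient_accumulation_steps=2,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=7.0,
)
```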
Intended Use
Given the highly specific training data, this model is best suited to applications aligned with the domain of the exp-uns-tezos-10x_glm_4.7_traces_jupiter data. Developers should treat it as a domain-specialized fine-tune rather than a general-purpose language model.
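As a usage sketch, inference follows the usual Qwen3 chat flow. The `enable_thinking` flag is part of the upstream Qwen3-8B chat template and is assumed to carry over to this fine-tune; the prompt below is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/exp-uns-tezos-10x_glm_4_7_traces_jupiter"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize what a trace dataset is."}]
# enable_thinking comes from the upstream Qwen3 chat template (assumption for this fine-tune).
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated continuation.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```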