laion/exp-syh-r2egym-askllm-hardened_glm_4_7_traces_jupiter

TEXT GENERATION · Model Size: 8B · Quant: FP8 · Context Length: 32k · Concurrency Cost: 1 · Published: Feb 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The laion/exp-syh-r2egym-askllm-hardened_glm_4_7_traces_jupiter model is an 8 billion parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--exp-syh-r2egym-askllm-hardened_glm_4.7_traces_jupiter/snapshots/625842bb217a7168a4b563bc70dc391100b5f483_thinking_preprocessed dataset and supports a 32,768-token context length. The fine-tuning adapts it for tasks aligned with its specialized training data.


Model Overview

This model, exp-syh-r2egym-askllm-hardened_glm_4_7_traces_jupiter, is an 8 billion parameter language model derived from Qwen/Qwen3-8B. It has been fine-tuned on the /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--exp-syh-r2egym-askllm-hardened_glm_4.7_traces_jupiter/snapshots/625842bb217a7168a4b563bc70dc391100b5f483_thinking_preprocessed dataset, which points to a specialized application or domain.
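
Assuming the weights load through the standard transformers API under the repository id shown in the title (an assumption; the card does not document a loading recipe), a minimal inference sketch might look like this:

```python
# Minimal loading sketch, assuming the checkpoint loads with AutoModelForCausalLM
# and ships the base model's chat template. Repository id taken from the title above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/exp-syh-r2egym-askllm-hardened_glm_4_7_traces_jupiter"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick up the checkpoint's native dtype
    device_map="auto",    # spread across available GPUs (requires accelerate)
)

messages = [{"role": "user", "content": "Summarize what a cosine LR schedule does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```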

Key Training Details

The fine-tuning run used the following hyperparameters (see the sketch after this list for how they map onto Hugging Face TrainingArguments):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Gradient Accumulation: 2 steps, giving an effective train batch size of 16 (1 per device × 2 accumulation steps × 8 GPUs).
  • Optimizer: AdamW_Torch_Fused with betas=(0.9, 0.98) and epsilon=1e-08.
  • Scheduler: Cosine learning rate scheduler with a 0.1 warmup ratio.
  • Epochs: 7.0
  • Distributed Training: 8 GPUs (multi-GPU training)
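
For reference, a sketch of how these reported settings map onto Hugging Face TrainingArguments; the output directory and seed are placeholders, not values from the card:

```python
# Hedged sketch of the reported hyperparameters expressed as TrainingArguments.
# Field names follow the Trainer API; output_dir and seed are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out/exp-syh-r2egym-askllm-jupiter",  # placeholder path
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 1 x 2 x 8 GPUs = effective batch of 16
    num_train_epochs=7.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    seed=42,                         # assumption: seed is not stated in the card
)
```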

Framework Versions

The model was trained using:

  • Transformers 4.57.6
  • PyTorch 2.9.0+cu128
  • Datasets 4.4.1
  • Tokenizers 0.22.2
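
A quick way to check that a local environment matches these reported versions before attempting to reproduce the run (a minimal sketch; the check itself is not part of the card):

```python
# Compare installed library versions against the versions reported above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.57.6",
    "torch": "2.9.0+cu128",
    "datasets": "4.4.1",
    "tokenizers": "0.22.2",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    got = installed[name]
    status = "OK" if got == want else f"mismatch (expected {want})"
    print(f"{name}: {got} -- {status}")
```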

Further details regarding the model's specific capabilities, intended uses, and limitations are not provided in the current documentation.