laion/exp-syh-r2egym-askllm-constrained_glm_4_7_traces_jupiter

Text Generation | Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Ctx Length: 32k | Published: Feb 17, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights

The laion/exp-syh-r2egym-askllm-constrained_glm_4_7_traces_jupiter model is an 8 billion parameter language model fine-tuned from Qwen/Qwen3-8B. It supports a 32768 token context length and was trained on a dataset identified as 'exp-syh-r2egym-askllm-constrained_glm_4.7_traces_jupiter'. The model is intended for tasks aligned with its specialized fine-tuning data; detailed capability information has not yet been published.


Model Overview

This model, laion/exp-syh-r2egym-askllm-constrained_glm_4_7_traces_jupiter, is an 8 billion parameter language model fine-tuned from the Qwen/Qwen3-8B base architecture. It was trained on a specialized dataset identified as /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--exp-syh-r2egym-askllm-constrained_glm_4.7_traces_jupiter/snapshots/c6f0acf401312da7f0acba098ddc5bfc2d3abcb8_thinking_preprocessed.
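Because the model is fine-tuned from Qwen/Qwen3-8B, it can presumably be loaded through the standard Hugging Face `transformers` causal-LM interface. The sketch below is illustrative only: the repository id comes from this page, while the dtype handling, chat template usage, and generation settings are assumptions rather than documented behavior.

```python
# Minimal inference sketch (assumes the checkpoint keeps the Qwen3-style
# causal-LM layout and chat template inherited from Qwen/Qwen3-8B).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/exp-syh-r2egym-askllm-constrained_glm_4_7_traces_jupiter"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place weights on the available GPU(s)
)

messages = [{"role": "user", "content": "Summarize what this model was fine-tuned for."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```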

Training Details

The fine-tuning process used the following key hyperparameters (a configuration sketch follows after the framework versions below):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Gradient Accumulation: 2 steps, for a reported total train batch size of 16 (which implies training across multiple devices)
  • Optimizer: ADAMW_TORCH_FUSED with specific beta and epsilon values
  • Scheduler: Cosine learning rate scheduler with a 0.1 warmup ratio
  • Epochs: 7.0

The model was trained using Transformers 4.57.6, PyTorch 2.9.0+cu128, Datasets 4.4.1, and Tokenizers 0.22.2.
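For reference, the reported hyperparameters map onto a `transformers` Trainer configuration roughly as in the sketch below. The values shown are those listed above; the output directory, mixed-precision flag, and anything not stated on this card are placeholders.

```python
# Fine-tuning configuration sketch matching the reported hyperparameters.
# output_dir and bf16 are illustrative assumptions, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="exp-syh-r2egym-askllm-constrained_glm_4_7_traces_jupiter",
    learning_rate=4e-05,                 # reported learning rate
    per_device_train_batch_size=1,       # reported train batch size
    per_device_eval_batch_size=8,        # reported eval batch size
    gradient_accumulation_steps=2,       # reported accumulation steps
    num_train_epochs=7.0,                # reported epochs
    lr_scheduler_type="cosine",          # reported scheduler
    warmup_ratio=0.1,                    # reported warmup ratio
    optim="adamw_torch_fused",           # reported optimizer
    bf16=True,                           # assumed mixed precision; not stated on the card
)
```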

Current Status

Further information regarding the model's specific capabilities, intended uses, limitations, and evaluation results is currently pending.