laion/exp-uns-r2egym-16_8x_glm_4_7_traces_jupiter

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Feb 23, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

This model, laion/exp-uns-r2egym-16_8x_glm_4_7_traces_jupiter, is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the thinking_preprocessed variant of the DCAgent/exp-uns-r2egym-16_8x_glm_4.7_traces_jupiter dataset. With a context length of 32,768 tokens, it is optimized for tasks reflecting the characteristics of that dataset.


Model Overview

laion/exp-uns-r2egym-16_8x_glm_4_7_traces_jupiter is an 8-billion-parameter language model, a fine-tuned variant of Qwen/Qwen3-8B that inherits the base model's general language understanding and generation capabilities.
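Because the model derives from Qwen/Qwen3-8B, it should load through the standard transformers interface. The sketch below assumes the base model's chat template is retained; the prompt is purely illustrative.

```python
# Minimal inference sketch, assuming standard transformers compatibility
# inherited from Qwen/Qwen3-8B. The prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/exp-uns-r2egym-16_8x_glm_4_7_traces_jupiter"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Explain what a Python traceback is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```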

Training Details

The model was fine-tuned on the thinking_preprocessed variant of the DCAgent/exp-uns-r2egym-16_8x_glm_4.7_traces_jupiter dataset (snapshot f351781469e77321a7f815f7e9f7789e9b57a34e). Key training hyperparameters included:

  • Learning Rate: 4e-05
  • Batch Size: 1 per device (train), 8 per device (eval), with 2 gradient accumulation steps; the reported effective training batch size of 16 implies 8 data-parallel devices (1 × 2 × 8 = 16). A hypothetical reconstruction of this configuration follows the list.
  • Optimizer: ADAMW_TORCH_FUSED (PyTorch's fused AdamW implementation)
  • Epochs: 7.0
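These settings can be expressed as a transformers TrainingArguments object, sketched below under the assumption that the card's hyperparameters map directly onto the Hugging Face Trainer; the output_dir is invented for illustration, and the actual training framework is not stated.

```python
# Hypothetical reconstruction of the fine-tuning configuration, based only
# on the hyperparameters listed above. An effective batch size of 16 with
# per-device batch 1 and gradient accumulation 2 implies 8 devices.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="exp-uns-r2egym-16_8x_glm_4_7_traces_jupiter",  # hypothetical
    learning_rate=4e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=7.0,
    optim="adamw_torch_fused",
)
```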

This fine-tuning tailors the model's capabilities to the characteristics and content of the dataset above. The model operates within a 32,768-token context window, allowing it to process and generate long sequences of text.
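A small sketch of how one might verify that an input fits within that window before generation; long_document is a placeholder standing in for real input text.

```python
# Check an input against the 32,768-token context limit before generation.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "laion/exp-uns-r2egym-16_8x_glm_4_7_traces_jupiter"
)

long_document = "..."  # placeholder; substitute the actual input text
token_ids = tokenizer(long_document, truncation=True, max_length=32768)["input_ids"]
print(f"{len(token_ids)} tokens (context limit: 32768)")
```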