laion/exp-uns-r2egym-33_6x_glm_4_7_traces_jupiter

TEXT GENERATION

  • Concurrency Cost: 1
  • Model Size: 8B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Feb 20, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Open Weights, Cold

The laion/exp-uns-r2egym-33_6x_glm_4_7_traces_jupiter model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the DCAgent/exp-uns-r2egym-33_6x_glm_4.7_traces_jupiter dataset, referenced in the training configuration by its local Hugging Face cache path: /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--exp-uns-r2egym-33_6x_glm_4.7_traces_jupiter/snapshots/9f6fd69f6fa50425609d375c4f7198b192f4a61b_thinking_preprocessed. The model's primary differentiator is this training data, which suggests it is optimized for tasks related to the dataset's content.


Model Overview

This model, laion/exp-uns-r2egym-33_6x_glm_4_7_traces_jupiter, is an 8-billion-parameter language model derived from Qwen/Qwen3-8B. It was fine-tuned on the DCAgent/exp-uns-r2egym-33_6x_glm_4.7_traces_jupiter dataset described above; the _thinking_preprocessed suffix on the snapshot path suggests the training data was preprocessed to include thinking (reasoning) traces.
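Since the checkpoint is a standard fine-tune of Qwen/Qwen3-8B, it should load like any other transformers causal LM. The sketch below is a minimal, untested example; the prompt and generation settings are assumptions, not values from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/exp-uns-r2egym-33_6x_glm_4_7_traces_jupiter"

# Load tokenizer and weights; dtype and device placement chosen automatically.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Qwen3-style chat formatting (assumed to be inherited from the base model).
messages = [{"role": "user", "content": "Summarize the repository's test failures."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```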

Training Details

The fine-tuning process used the following key hyperparameters (a configuration sketch follows the list):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Gradient Accumulation Steps: 2, for a total effective batch size of 16 (with a per-device train batch size of 1, this implies training across 8 devices)
  • Optimizer: fused AdamW (adamw_torch_fused); the betas and epsilon were configured explicitly but are not reported on this card
  • LR Scheduler: cosine, with a warmup ratio of 0.1
  • Epochs: 7.0
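For concreteness, here is a minimal sketch of how these hyperparameters would map onto a Hugging Face TrainingArguments object. The output_dir and the Adam betas/epsilon values are assumptions (the card does not list them); everything else comes from the list above.

```python
from transformers import TrainingArguments

# Sketch of the reported fine-tuning configuration.
training_args = TrainingArguments(
    output_dir="exp-uns-r2egym-33_6x_glm_4_7_traces_jupiter",  # assumed name
    learning_rate=4e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total effective batch 16 across 8 devices
    optim="adamw_torch_fused",
    adam_beta1=0.9,                  # assumed: transformers default
    adam_beta2=0.999,                # assumed: transformers default
    adam_epsilon=1e-08,              # assumed: transformers default
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=7.0,
)
```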

Potential Use Cases

Given its fine-tuning on a single specialized dataset, this model is likely best suited to tasks that match the content and structure of that data. Developers should inspect the DCAgent/exp-uns-r2egym-33_6x_glm_4.7_traces_jupiter dataset (the _thinking_preprocessed snapshot referenced by the cache path above) to judge whether it fits their needs; a sketch of such an inspection follows. Without more information about the dataset's contents, the model's general-purpose utility is unknown, but its specialized training suggests a focus on particular domains or tasks.
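As a starting point, the cache path suggests the data lives on the Hub as DCAgent/exp-uns-r2egym-33_6x_glm_4.7_traces_jupiter; whether that repository is publicly accessible is an assumption. A minimal inspection sketch:

```python
from datasets import load_dataset

# Assumed Hub repo id, reconstructed from the local cache path
# ("datasets--DCAgent--exp-uns-r2egym-33_6x_glm_4.7_traces_jupiter").
# The repo may be private or gated, in which case this call will fail.
ds = load_dataset("DCAgent/exp-uns-r2egym-33_6x_glm_4.7_traces_jupiter", split="train")

print(ds)               # number of rows and schema
print(ds.column_names)  # e.g. prompt/response/trace fields
print(ds[0])            # inspect one raw example
```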