laion/GLM-4_7-r2egym_sandboxes-maxeps-131k

Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Feb 8, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

The laion/GLM-4_7-r2egym_sandboxes-maxeps-131k model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B on the DCAgent2/GLM-4.7-r2egym_sandboxes-maxeps-131k dataset. The dataset name points to the r2egym sandboxes environment, suggesting a specialization in reinforcement learning or agent-based interactions within simulated environments rather than general-purpose chat.


Model Overview

laion/GLM-4_7-r2egym_sandboxes-maxeps-131k is built on the Qwen3-8B architecture and fine-tuned on the DCAgent2/GLM-4.7-r2egym_sandboxes-maxeps-131k dataset, which indicates a specialized application rather than a general-purpose LLM.
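
As a Qwen3-8B derivative, the checkpoint should load through the standard Hugging Face transformers API; the snippet below is a minimal sketch and assumes the repository ships the usual config, tokenizer, and weight files (the model card does not spell this out).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repository follows the standard Hugging Face layout
# for Qwen3-8B derivatives (config.json, tokenizer files, safetensors).
model_id = "laion/GLM-4_7-r2egym_sandboxes-maxeps-131k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # requires `accelerate` for automatic placement
)
```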

Training Details

The fine-tuning process used the following hyperparameters (a hedged reconstruction as a transformers training configuration follows the list):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Gradient Accumulation: 2 steps, for a reported total effective batch size of 16 (a per-device batch of 1 with 2 accumulation steps yields 2 per device, so the figure of 16 is consistent with training across 8 devices)
  • Optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.98) and epsilon=1e-08
  • LR Scheduler: cosine, with a warmup ratio of 0.1
  • Epochs: 7.0
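
For concreteness, here is a hedged sketch of how the reported values map onto a Hugging Face transformers TrainingArguments object. The output_dir is a placeholder, the 8-device setup is inferred from the effective batch size, and the actual training script is not published.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the reported hyperparameters; "./output" is a
# placeholder, and the multi-device setup implied by the effective batch
# size of 16 is an assumption, not stated in the model card.
args = TrainingArguments(
    output_dir="./output",
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 1 x 2 x 8 devices = effective batch 16
    optim="adamw_torch_fused",       # ADAMW_TORCH_FUSED in the card
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=7.0,
)
```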

Potential Use Cases

Given its fine-tuning dataset, this model is most plausibly intended for tasks within simulated environments, particularly the r2egym sandboxes (a hedged inference example follows the list). Plausible applications include:

  • Agent behavior generation
  • Reinforcement learning environments
  • Simulated interaction analysis
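
Since the model card documents neither a prompt format nor an agent harness, the following is only a minimal sketch of chat-style inference, reusing the tokenizer and model objects from the loading snippet above; the system prompt is an illustrative assumption.

```python
import torch

# Illustrative prompt only; the model card does not document a prompt format.
messages = [
    {"role": "system", "content": "You are an agent operating in a sandboxed environment."},
    {"role": "user", "content": "List the steps you would take to explore the workspace."},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```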

Limitations

The model card itself notes that more information is needed on the model's intended uses, limitations, and the details of its training and evaluation data. Users should exercise caution and test thoroughly before deploying it for their specific applications.