laion/exp-syh-r2egym-askllm-constrained_glm_4_7_traces_jupiter_cleaned

  • Task: Text generation
  • Model size: 8B
  • Quantization: FP8
  • Context length: 32k
  • Concurrency cost: 1
  • Published: Feb 27, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

The laion/exp-syh-r2egym-askllm-constrained_glm_4_7_traces_jupiter_cleaned model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the dataset at /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--exp-syh-r2egym-askllm-constrained_glm_4.7_traces_jupiter_cleaned/snapshots/d13cd4ded646d8380dc70005a25fadeae9836514_thinking_preprocessed. Its specific differentiators and primary use cases are not documented, suggesting it may be an experimental or specialized fine-tune.


Overview

This model, exp-syh-r2egym-askllm-constrained_glm_4_7_traces_jupiter_cleaned, is an 8-billion-parameter language model based on the Qwen3-8B architecture. It has been fine-tuned on a dataset located at /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--exp-syh-r2egym-askllm-constrained_glm_4.7_traces_jupiter_cleaned/snapshots/d13cd4ded646d8380dc70005a25fadeae9836514_thinking_preprocessed.
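As a causal language model fine-tuned from Qwen3-8B, it should load through the standard transformers API. Below is a minimal inference sketch, assuming the weights are published on the Hub under the ID above and that the fine-tune keeps the Qwen3 chat template; neither detail is confirmed by the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/exp-syh-r2egym-askllm-constrained_glm_4_7_traces_jupiter_cleaned"

# Load tokenizer and model; device_map="auto" places the 8B weights on
# available accelerators (assumption: enough memory for the checkpoint dtype).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Chat-style prompt (assumption: the base model's chat template is retained).
messages = [{"role": "user", "content": "Explain what a fine-tuned model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```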

Training Details

The fine-tuning process used the following hyperparameters (a configuration sketch follows the list):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Gradient Accumulation: 2 steps; with a per-device train batch size of 1, the reported total train batch size of 16 implies 8 training devices (1 × 2 × 8 = 16)
  • Optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.98) and epsilon=1e-08
  • Scheduler: Cosine learning rate scheduler with a 0.1 warmup ratio
  • Epochs: 7.0
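These values map directly onto a standard transformers TrainingArguments configuration. The sketch below reconstructs them under the assumption that the Hugging Face Trainer was used; the actual training script is not provided, and output_dir and the precision setting are assumptions.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="exp-syh-r2egym-askllm-finetune",  # assumed name
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # with 8 devices: 1 * 2 * 8 = 16 effective
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=7.0,
    bf16=True,  # assumption: mixed precision is not stated in the card
)
```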

Limitations

The model card indicates that more information is needed regarding its specific intended uses, limitations, and training/evaluation data. Developers should exercise caution and conduct thorough testing to determine its suitability for particular applications, as its unique capabilities or optimizations are not explicitly defined.