laion/dev_set_part1_10k_glm_4_7_traces_jupiter_cleaned

  • Task: Text Generation
  • Concurrency Cost: 1
  • Model Size: 8B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Feb 26, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Tags: Open Weights

The laion/dev_set_part1_10k_glm_4_7_traces_jupiter_cleaned model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--dev_set_part1_10k_glm_4.7_traces_jupiter_cleaned dataset. It is a specialized fine-tune of the Qwen3-8B base; its specific differentiators and intended uses are not yet documented.


Model Overview

This model, laion/dev_set_part1_10k_glm_4_7_traces_jupiter_cleaned, is an 8-billion-parameter language model built on the Qwen/Qwen3-8B architecture, so it inherits the capabilities of that base model. Fine-tuning used the /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--dev_set_part1_10k_glm_4.7_traces_jupiter_cleaned dataset.
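Because the model is a fine-tune of Qwen/Qwen3-8B, it should load with the standard Hugging Face transformers API. The sketch below is illustrative only: the dtype, device placement, and prompt are assumptions, not taken from this card (the FP8 quant listed above likely refers to the hosted serving setup, not the checkpoint weights).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "laion/dev_set_part1_10k_glm_4_7_traces_jupiter_cleaned"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # assumption: bf16 for local inference
    device_map="auto",
)

# Qwen3-style chat models expect the chat template for best results.
messages = [{"role": "user", "content": "Briefly explain what fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```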

Training Details

The model was trained with the following hyperparameters (see the sketch after this list):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Gradient Accumulation: 2 steps
  • Optimizer: AdamW_Torch_Fused with betas=(0.9, 0.98) and epsilon=1e-08
  • LR Scheduler: Cosine type with a 0.1 warmup ratio
  • Epochs: 7.0
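These values map one-to-one onto Hugging Face TrainingArguments fields. The following is a reconstruction for reference, assuming the transformers Trainer was used; output_dir and any setting not listed above are placeholders.

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above; unlisted fields are placeholders.
training_args = TrainingArguments(
    output_dir="./qwen3-8b-finetune",   # placeholder, not from the card
    learning_rate=4e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,      # effective train batch: 1 x 2 = 2 per device
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=7.0,
)
```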

Current Status

Based on the information available, the card lacks a detailed model description and documentation of intended uses, limitations, and training/evaluation data. Developers should consult additional documentation or the model's maintainers for a comprehensive understanding of its capabilities and optimal applications.