laion/exp-uns-tezos-80x_glm_4_7_traces_jupiter_cleaned

  • Task: Text Generation
  • Concurrency Cost: 1
  • Model Size: 8B
  • Quant: FP8
  • Context Length: 32k
  • Published: Feb 27, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

laion/exp-uns-tezos-80x_glm_4_7_traces_jupiter_cleaned is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the dataset at /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--exp-uns-tezos-80x_glm_4.7_traces_jupiter_cleaned/snapshots/9c7e761a81f0ec66ab89b6cf6bb15ba6ec330c5c_thinking_preprocessed and supports a context length of 32,768 tokens. The fine-tune adapts the Qwen3 architecture to the tasks represented in that dataset.


Model Overview

This model is an 8-billion-parameter language model derived from the Qwen3-8B architecture and fine-tuned on the dataset listed above.
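
The checkpoint can be loaded with the standard transformers API. The sketch below assumes the weights are published under the repo id above in the usual Qwen3 layout; the card itself does not document a loading procedure.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/exp-uns-tezos-80x_glm_4_7_traces_jupiter_cleaned"

# Standard transformers loading path; assumes the checkpoint follows the
# usual Qwen3-8B layout (not confirmed by the card itself).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick up the dtype stored in the checkpoint
    device_map="auto",    # requires `accelerate` for automatic placement
)
```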

Training Details

The fine-tuning run used the following hyperparameters (a configuration sketch follows the list):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Gradient Accumulation: 2 steps, for an effective global batch size of 16 (1 train sample × 2 accumulation steps × 8 GPUs)
  • Optimizer: AdamW (torch fused) with betas = (0.9, 0.98) and epsilon = 1e-08
  • LR Scheduler: Cosine, with a warmup ratio of 0.1
  • Epochs: 7.0
  • Devices: Trained across 8 GPUs
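
Expressed as Hugging Face TrainingArguments, the reported values map onto standard fields as sketched below. The original training script is not published, so this is a reconstruction of the configuration, not the actual code; the output_dir name is hypothetical.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters mapped onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="exp-uns-tezos-80x_glm_4_7_traces_jupiter_cleaned",  # hypothetical
    learning_rate=4e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective batch: 1 * 2 * 8 GPUs = 16
    num_train_epochs=7.0,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
```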

This configuration suggests the model is optimized for the task distribution of its fine-tuning dataset. It retains the base model's 32,768-token context window, allowing it to process long inputs in a single pass.
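
For inference, a chat-style generation call might look like the following. This assumes the fine-tune keeps the Qwen3 chat template (plausible given the `_thinking_preprocessed` dataset suffix, but not stated on the card) and reuses the `tokenizer` and `model` objects from the loading sketch above.

```python
# Chat-style generation, reusing `tokenizer` and `model` from the loading
# sketch above; assumes the Qwen3 chat template is preserved.
messages = [{"role": "user", "content": "Explain gradient accumulation in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```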