laion/swesmith-unified-316__Qwen3-8B

Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Mar 25, 2026 · License: other · Architecture: Transformer

The laion/swesmith-unified-316__Qwen3-8B model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the /e/data1/datasets/playground/ot/hf_hub/datasets--laion--swesmith-unified-316/snapshots/ade1f2491564703125701b64b882762203639119_thinking_preprocessed dataset. The model is intended for general language understanding and generation tasks, and its 32,768-token context length allows it to process longer inputs.


Model Overview

laion/swesmith-unified-316__Qwen3-8B is derived from the Qwen/Qwen3-8B base model and adapted through fine-tuning on the /e/data1/datasets/playground/ot/hf_hub/datasets--laion--swesmith-unified-316/snapshots/ade1f2491564703125701b64b882762203639119_thinking_preprocessed dataset.
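A minimal generation sketch, assuming the model is available under the repository id laion/swesmith-unified-316__Qwen3-8B and that the standard Hugging Face transformers text-generation API applies (the prompt and generation settings below are illustrative, not taken from the model card):

```python
# Minimal inference sketch (assumes the repo id resolves and the model
# ships with a chat template, as Qwen3-based checkpoints typically do).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/swesmith-unified-316__Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # requires the accelerate package
)

messages = [{"role": "user", "content": "Summarize what a 32k-token context length enables."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```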

Training Details

The model was trained with the following key hyperparameters (see the configuration sketch after this list):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Total Batch Size: 96 (train: 1 × 32 devices × 3 gradient accumulation steps), 256 (eval: 8 × 32 devices)
  • Optimizer: fused AdamW (adamw_torch_fused) with betas=(0.9, 0.98) and epsilon=1e-08
  • Scheduler: Cosine learning rate scheduler with a 0.1 warmup ratio
  • Epochs: 7.0
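
For illustration only, here is a hedged sketch of how these hyperparameters might be expressed as Hugging Face TrainingArguments. The actual training script is not included in the model card, and output_dir is a placeholder:

```python
# Hypothetical TrainingArguments mirroring the reported hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swesmith-unified-316__Qwen3-8B",  # placeholder path
    learning_rate=4e-05,
    per_device_train_batch_size=1,   # 1 x 32 devices x 3 grad-accum steps = 96 total
    per_device_eval_batch_size=8,    # 8 x 32 devices = 256 total
    gradient_accumulation_steps=3,
    num_train_epochs=7.0,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
```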

Framework Versions

Training utilized:

  • Transformers 4.57.6
  • Pytorch 2.9.1+cu130
  • Datasets 4.7.0
  • Tokenizers 0.22.2
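
A quick way to check whether a local environment matches these versions (the expected values in the comments are the ones reported above):

```python
# Print installed versions to compare against those used for training.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.57.6
print("PyTorch:", torch.__version__)              # expected 2.9.1+cu130
print("Datasets:", datasets.__version__)          # expected 4.7.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.22.2
```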

Further information on specific intended uses, limitations, and the training/evaluation data is not provided in the current model card.