laion/swesmith-unified-10000__Qwen3-8B

Text Generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Mar 25, 2026 · License: other · Architecture: Transformer

laion/swesmith-unified-10000__Qwen3-8B is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B on the laion/swesmith-unified-10000 dataset, which suggests specialization toward the characteristics of that data. It is intended for applications that benefit from a Qwen3-8B base combined with additional fine-tuning on that specialized dataset.


Model Overview

This model, laion/swesmith-unified-10000__Qwen3-8B, is an 8-billion-parameter language model derived from Qwen/Qwen3-8B. It was fine-tuned on the laion/swesmith-unified-10000 dataset; the training configuration references a local, "thinking_preprocessed" snapshot of that dataset (/e/data1/datasets/playground/ot/hf_hub/datasets--laion--swesmith-unified-10000/snapshots/816c0d8bac8880ed64e982483d103aa14eef1dff_thinking_preprocessed).
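
Because the model is a fine-tune of Qwen/Qwen3-8B, it should load through the standard transformers causal-LM interface. The snippet below is a minimal sketch: only the repository id comes from this card, the rest is generic boilerplate.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/swesmith-unified-10000__Qwen3-8B"

# Tokenizer and weights are pulled from the Hub; device_map="auto" places the
# 8B parameters on the available GPU(s), and torch_dtype="auto" keeps the
# checkpoint's native precision.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)
```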

Training Details

The fine-tuning run used the following hyperparameters (see the configuration sketch after the list):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Gradient Accumulation: 3 steps
  • Total Train Batch Size: 96 (1 per device × 3 accumulation steps × 32 devices)
  • Optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.98) and epsilon=1e-08
  • LR Scheduler: cosine with a warmup ratio of 0.1
  • Epochs: 7.0
  • Devices: multi-GPU distributed training across 32 devices
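
For readability, the values above correspond roughly to the transformers TrainingArguments shown below. This is a reconstruction, not the actual training script; the output_dir is illustrative, and anything not listed above (precision, logging, checkpointing) is left at its defaults.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters; the real
# training script is not included in this card.
training_args = TrainingArguments(
    output_dir="swesmith-unified-10000__Qwen3-8B",  # illustrative path
    learning_rate=4e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=3,   # 1 x 3 x 32 devices = total batch of 96
    num_train_epochs=7.0,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
```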

Intended Use

Specific intended uses and limitations are not documented for this model. In general, it should be suitable for tasks where the base Qwen3-8B model performs well, with potential specialization introduced by fine-tuning on the laion/swesmith-unified-10000 dataset. Developers should evaluate its performance against their own requirements, taking the fine-tuning dataset into account; a minimal smoke test is sketched below.
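
As a quick sanity check before committing to the model, a single prompt can be run through the chat template. The prompt and generation settings below are placeholders, not recommendations; substitute a task representative of your own workload.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/swesmith-unified-10000__Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Placeholder prompt; replace with something closer to your intended use case.
messages = [{"role": "user", "content": "Summarize what a Python IndexError means in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```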